SRAVAN K.

LEAD BIG DATA ENGINEER

HYDERABAD, India

Experience: 13 Years

53382.4 USD / Year

  • Notice Period: Days

About Me

13+ years of experience in Big Data, Azure Cloud, Machine Learning, Talend ETL, and Java development. Developed Big Data analytical modules based on Hadoop and Spark. Architecting and leading end-to-end analytical modules for production-scale deployme...


Portfolio Projects

Description

Abstract: An analytical module to predict whether a PAF is returned successfully. This saves expenses by avoiding PAF deployment for members who may not be reachable.
Pre-Processing: Integrated multiple domains such as demographics, risk scores, hospital visits, quality, and claims. Performed imputation, data binning, data type conversions, etc.
Exploratory Analysis: Outlier analysis, statistical tests to identify correlation, etc.
Models: GBM and Logistic Regression. Achieved 75% accuracy with a minimal number of features, and identified the significant variables, which helped define the Care Modality Rules.
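The imputation and binning steps described above can be sketched in plain Python. The column name, bin edges, and mean-imputation strategy are illustrative assumptions, not details of the original module.

```python
# Sketch of the pre-processing steps: mean imputation followed by
# equal-width binning into low/mid/high bands.

def impute_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def bin_values(values, edges):
    """Assign each value the index of the first bin edge it falls under."""
    def bin_of(v):
        for i, edge in enumerate(edges):
            if v <= edge:
                return i
        return len(edges)
    return [bin_of(v) for v in values]

risk_scores = [0.2, None, 0.8, 0.5]       # assumed "risk score" column
imputed = impute_mean(risk_scores)        # None -> 0.5 (mean of the rest)
bins = bin_values(imputed, [0.33, 0.66])  # low / mid / high risk bands
```

After steps like these, the cleaned features would be fed to the GBM and Logistic Regression models mentioned above.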



Description

Abstract: An analytical module to retrieve data from multiple sources and prioritize clinical charts, along with derived information such as scores and specialty that is needed downstream.
Technologies:
PySpark, Java, Oozie, Hive, Sqoop, Azure HDInsight and Azure Storage
Achievements:

Migrated the complex analytical engine from SAS to Big Data in a short span of time.
Achieved strong performance: the whole process completes within 8 hours for 125 million claims.
Accurately prioritized the charts based on chart criticality.
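The prioritization idea can be sketched as a criticality-weighted ranking. The field names, weight table, and scoring rule below are illustrative assumptions; the production module runs this logic at scale in PySpark on Azure HDInsight.

```python
# Rank clinical charts by a derived criticality score (assumed rule).

CRITICALITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}  # assumed scale

def prioritize(charts):
    """Attach a score to each chart and return them highest-score first."""
    def score(chart):
        return CRITICALITY_WEIGHT[chart["criticality"]] * chart["risk_score"]
    return sorted(charts, key=score, reverse=True)

charts = [
    {"id": "c1", "criticality": "low", "risk_score": 0.9},
    {"id": "c2", "criticality": "high", "ris_score": 0.4} if False else
    {"id": "c2", "criticality": "high", "risk_score": 0.4},
]
ranked = prioritize(charts)  # c2 first: 3 * 0.4 = 1.2 beats 1 * 0.9
```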


Description

Abstract:
A configurable framework to evaluate the quality of input and output data. It generates a report with the list of checks failed/passed, and supports around 30 quality checks on files of multiple formats.
Technologies:
Spark, Java, Shell and Oozie.
Achievements:

Saved the cost of the Alteryx license previously used for data quality.
Expanded the tool's usage to multiple teams across the organization.
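The framework's shape can be sketched as named check predicates plus a pass/fail report. The two checks here are illustrative assumptions; the real tool supports around 30 checks across file formats and runs on Spark.

```python
# Configurable quality checks: each check is a named predicate over the
# rows of an input file; the report lists which checks passed or failed.

def not_empty(rows):
    return len(rows) > 0

def no_null_fields(rows):
    return all(all(field != "" for field in row) for row in rows)

CHECKS = {"not_empty": not_empty, "no_null_fields": no_null_fields}

def run_checks(rows, check_names):
    """Run the configured checks and return a pass/fail report."""
    return {name: "Passed" if CHECKS[name](rows) else "Failed"
            for name in check_names}

report = run_checks([["a", "b"], ["c", ""]],
                    ["not_empty", "no_null_fields"])
# {'not_empty': 'Passed', 'no_null_fields': 'Failed'}
```

Making the check list a configuration input is what lets multiple teams reuse the same tool on different files.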



Description

Abstract: Revvy Bot is an intelligent ticket classification system designed to classify tickets into specific categories. The solution is developed in Python using the Keras deep learning library. Text mining is performed on the tickets: word embeddings are applied and fed to CNNs to determine the most probable category.
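The text-mining front end can be sketched as vocabulary building plus fixed-length integer encoding, which is what an embedding layer and CNN consume. The vocabulary, tickets, and sequence length are illustrative assumptions; the Keras model itself is omitted.

```python
# Map ticket text to padded integer sequences for an embedding + CNN.

def build_vocab(tickets):
    """Assign each token a positive integer id; 0 is reserved for padding."""
    vocab = {}
    for text in tickets:
        for token in text.lower().split():
            vocab.setdefault(token, len(vocab) + 1)
    return vocab

def encode(text, vocab, maxlen):
    """Encode a ticket as token ids, truncated/padded to maxlen."""
    seq = [vocab.get(tok, 0) for tok in text.lower().split()][:maxlen]
    return seq + [0] * (maxlen - len(seq))

tickets = ["printer not working", "reset my password"]
vocab = build_vocab(tickets)
encoded = [encode(t, vocab, 5) for t in tickets]
```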


Description

Revenue Cloud is an enterprise application to handle revenue management in the US life science industry. We are responsible for platform development, which is inherited by the application development teams.


Description

Abstract: This solution was implemented to bring down the execution time of workbook calculation using Big Data technologies. Data sync from Oracle to AWS S3 is performed using Sqoop, and sequential queries are executed through the Thrift Server, a JDBC layer on Spark.
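The sequential query flow can be sketched against any DB-API connection. In the real system the queries go to a Spark Thrift Server over JDBC; here an in-memory sqlite3 database stands in for that layer so the sketch is runnable, and the table and queries are illustrative assumptions.

```python
# Execute the workbook's queries one after another on a single cursor.

import sqlite3

def run_sequential(conn, queries):
    """Run queries in order, collecting the result set of each."""
    cur = conn.cursor()
    results = []
    for q in queries:
        cur.execute(q)
        results.append(cur.fetchall())
    return results

conn = sqlite3.connect(":memory:")  # stand-in for the Thrift/JDBC connection
conn.execute("CREATE TABLE claims (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO claims VALUES (?, ?)", [(1, 10.0), (2, 30.0)])
results = run_sequential(conn, ["SELECT COUNT(*) FROM claims",
                                "SELECT SUM(amount) FROM claims"])
# [[(2,)], [(40.0,)]]
```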


Description

Middleware developed using an ETL tool to integrate the SAP R/3 system with Workday HR.


Description

A payroll engine that calculates payroll and generates paychecks for US payroll.


Description

Abstract:
An analytical module to recommend an appropriate assessment program for a member from among the different programs available, such as In-Office Assessment (IOA) and In-Home Assessment (IHA). The recommendation is based on predefined rules across multiple domains.
Technologies:
PySpark, Java, Oozie, Hive, Shell, Azure Storage, HDInsight and Kafka
Achievements:

Integrated this process into the existing analytical structure and achieved very good performance (20-30 minutes for 8 million members).
Migrating the module to Azure.
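The rule-driven recommendation can be sketched as a small decision function over member attributes. The specific rules, fields, and thresholds below are illustrative assumptions, not the production rule set.

```python
# Route each member to an assessment program via predefined rules.

def recommend(member):
    """Return the assessment program a member should be routed to."""
    if member["homebound"]:
        return "IHA"            # in-home assessment for homebound members
    if member["risk_score"] >= 0.7:
        return "IHA"            # assumed rule: high-risk assessed at home
    return "IOA"                # default: in-office assessment

members = [
    {"id": 1, "homebound": True, "risk_score": 0.2},
    {"id": 2, "homebound": False, "risk_score": 0.9},
    {"id": 3, "homebound": False, "risk_score": 0.1},
]
programs = [recommend(m) for m in members]  # ['IHA', 'IHA', 'IOA']
```

In production the same per-member logic would run as a PySpark transformation to cover 8 million members in the 20-30 minutes cited above.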
