Rajat G.

Product Engineer / Data Scientist

Delhi, India

Experience: 7 Years


Rate: 4000000 USD / Year

  • Availability: Immediate



About Me

I have 6+ years of experience as a Data Scientist and have worked on Machine Learning and Deep Learning algorithms, Hadoop (data engineering), Python, and Java projects. Around 2 of those years have been as a freelancer.


I have expertise in working with Spark, Machine Learning algorithms, H2O.ai, Hadoop, deep neural networks, and chatbots. During my career I have worked with Fortune 500 companies around the world, including Citibank, Vodafone, Ericsson, Wipro, Amgen, and many other international clients.

Skills

Mobile Apps

Portfolio Projects

Customer Care IVR Automation

Company

Customer Care IVR Automation

Role

Backend Developer

Description

Description: Automate the existing IVR process to remove the dependency on customer care executives
Tech Stack: Confluent Kafka, Hive, Spark, Python, Machine Learning, IBM RTC, Jenkins
• Handled real-time data ingestion with Kafka, paired with Spark Streaming for large-scale processing (see the sketch after this list)
• Integrated multiple data sources via the Kafka ingestion layer
• Played a key role in writing Spark scripts for data transformation, mapping, standardization & data characterization
• Deployed Spark transformations to extract meaning and value from structured & semi-structured data
• Performed analysis on the data by implementing various machine learning algorithms in Python
• Improved model performance by tuning hyperparameters with GridSearchCV
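A minimal sketch of the Kafka-to-Spark Streaming ingestion step described above; the broker address, topic name, and event schema are illustrative assumptions, not the project's actual configuration:

```python
# Minimal sketch: consume IVR events from Kafka with Spark Structured Streaming.
# Broker address, topic name, and event schema are illustrative assumptions.
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("ivr-stream").getOrCreate()

# Assumed shape of an incoming IVR event.
event_schema = StructType([
    StructField("call_id", StringType()),
    StructField("menu_path", StringType()),
    StructField("duration_sec", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "ivr-events")                  # placeholder topic
       .load())

# Kafka delivers the payload as bytes; parse the JSON value into columns.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# For the sketch, print parsed events; in the project this would feed Hive / ML scoring.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```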


Vodafone database migration project

Company

Vodafone database migration project

Role

Backend Developer

Description

Description: Create a data lake that serves as the single, comprehensive source of information to improve decision making
Tech Stack: Sqoop, PySpark, Python, AWS (EMR)
• Set up the development environment on AWS EMR
• Oversaw data extraction from Charging System nodes (Oracle Exadata) to HDFS using Sqoop
• Played a key role in writing the Python scripts
• Performed analysis with Spark scripts to extract meaning and value from the structured data (see the sketch after this list)
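A minimal sketch of the PySpark analysis step over the Sqoop-ingested extracts on HDFS; the paths and column layout are assumptions made for illustration:

```python
# Minimal sketch: analyse Sqoop-ingested charging-system extracts on HDFS with
# PySpark on EMR. Paths and column layout are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vodafone-datalake").getOrCreate()

# Sqoop typically lands RDBMS extracts as delimited text files on HDFS.
charges = spark.read.csv(
    "hdfs:///data/charging_system/charges/",                # placeholder path
    schema="msisdn STRING, charge DOUBLE, charged_at TIMESTAMP",
)

# Example aggregation feeding a single, comprehensive reporting layer.
daily_revenue = (charges
                 .groupBy(F.to_date("charged_at").alias("day"))
                 .agg(F.sum("charge").alias("revenue")))

daily_revenue.write.mode("overwrite").parquet("hdfs:///datalake/daily_revenue/")
```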


Ericsson

Company

Ericsson

Role

Backend Developer

Description

Description: Constant tariff updates create a persistent need to track their impact on ROI, which determines end-user satisfaction
Tech Stack: Docker, Sqoop, Hive, PySpark, Airflow, Tableau, AWS Redshift
• Set up the development environment on Docker
• Played a key role in writing the Sqoop ingestion scripts from a MySQL database to HDFS
• Performed analysis with Spark scripts to calculate traditional & ad-hoc KPIs from structured & unstructured data
• Managed data ingestion with Sqoop, along with maintenance, cleaning, and manipulation of data using Spark scripts, orchestrated with Airflow (see the DAG sketch after this list)
• Exported processed data to AWS Redshift and prepared Tableau dashboards for management
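A minimal Airflow DAG sketch of how the Sqoop ingestion, Spark KPI job, and Redshift load could be chained; the DAG id, commands, connection strings, and paths are assumptions for illustration, not the project's actual configuration:

```python
# Minimal Airflow 2.x DAG sketch: Sqoop ingest -> Spark KPI job -> Redshift load.
# DAG id, commands, connection strings, and paths are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="tariff_kpi_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="sqoop_ingest",
        bash_command=(
            "sqoop import --connect jdbc:mysql://db-host/tariff "
            "--table charges --target-dir /data/raw/charges"
        ),
    )
    transform = BashOperator(
        task_id="spark_kpis",
        bash_command="spark-submit compute_kpis.py /data/raw/charges /data/kpis",
    )
    load = BashOperator(
        task_id="load_redshift",
        bash_command="python load_to_redshift.py /data/kpis",
    )

    # Run the three steps strictly in order each day.
    ingest >> transform >> load
```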


New datalake using Spark and prediction using ML

Company

New datalake using Spark and prediction using ML

Role

Data Scientist

Description

• Implemented new ingestion pipelines and features in Spark to extract meaningful value from semi-structured data (see the sketch after this list)
• Tuned and optimized the performance of existing Spark data pipelines
• Migrated multiple data sources via AWS DMS
• Set up a data warehouse on Amazon Redshift, creating Redshift clusters and running data analysis queries
• Migrated previous on-premise RDBMS data sources into AWS storage (S3)
• Performed analysis on the data by implementing various machine learning and deep learning algorithms
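A minimal sketch of a Spark ingestion and feature step over semi-structured JSON landed in S3; the bucket paths, field names, and the derived feature are assumptions made for illustration:

```python
# Minimal sketch: flatten semi-structured JSON landed in S3 into daily features
# with Spark. Bucket paths, field names, and the feature itself are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("datalake-ingest").getOrCreate()

raw = spark.read.json("s3a://example-datalake/raw/events/")  # placeholder path

# Pull nested fields out of the JSON payload and aggregate per customer per day.
features = (raw
            .withColumn("event_date", F.to_date("event_ts"))           # assumed column
            .withColumn("amount", F.col("payload.amount").cast("double"))
            .groupBy("customer_id", "event_date")
            .agg(F.sum("amount").alias("daily_spend"),
                 F.count("*").alias("event_count")))

# Write back to the curated zone of the lake as partitioned Parquet.
(features.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3a://example-datalake/curated/daily_features/"))
```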


Tools

PyCharm