Abhishek K.

Data Engineer

Bengaluru, India

Experience: 6 Years

69397.1 USD / Year

  • Immediate: Available



About Me

Abhishek has been working in the IT industry for the past 6 years. He has worked as a data engineer with technologies including Python, Django, Flask, machine learning, pandas, NumPy, PySpark, and Docker. He is curious about learning new technologies. He is a...


Portfolio Projects

Description

The project involved creating scripts for monitoring and validating services running on Kubernetes.
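The source does not show the monitoring scripts themselves; as a minimal sketch of what such a validation check might look like, the function below parses `kubectl get pods` output and flags pods that are not in the `Running` state. The pod names and columns are illustrative assumptions, not details from the project.

```python
# Hypothetical sketch: validate pod health from `kubectl get pods` output.
# Column layout (NAME, READY, STATUS, ...) follows kubectl's default table.

def find_unhealthy_pods(kubectl_output: str) -> list[str]:
    """Return names of pods whose STATUS column is not 'Running'."""
    unhealthy = []
    lines = kubectl_output.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        fields = line.split()
        name, status = fields[0], fields[2]
        if status != "Running":
            unhealthy.append(name)
    return unhealthy

sample = """\
NAME        READY  STATUS             RESTARTS  AGE
api-0       1/1    Running            0         2d
worker-1    0/1    CrashLoopBackOff   7         2d
cache-2     1/1    Running            0         5h
"""
print(find_unhealthy_pods(sample))  # ['worker-1']
```

A real script would obtain the output by shelling out to `kubectl` (or calling the Kubernetes API) and raise an alert when the returned list is non-empty.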


Description

Data Capital Management (US based):

The project involved converting a standalone architecture to a microservice architecture using Python and AWS services. The client's domain was investment strategy.


Description

The project involved creating a full-stack application with a Django REST Framework backend and a frontend built on Next.js, a production-grade React framework, and deploying it on AWS.

Understanding the requirements for the current business problem as well as the upstream application, and building the application to visualize data and configuration related to project, endpoint, and plugin information present in AWS and Azure, based on tags.


Description

The project involved creating a real-time data streaming platform using PySpark and serverless functions, deployed on an EKS cluster. Worked as a data engineer building the pipeline for the ETL process.

Responsibilities:

● Understanding the requirements for the current business problem and applying them in PySpark to build a near real-time data streaming platform for medical devices and sensor data.

● Coordinating with the mobile and frontend teams to make the application work flawlessly.

● Worked on data science rule-based algorithms for scheduling and for alerting medical professionals when thresholds were exceeded.

● Created data pipelines for onboarding new sensors onto the SaaS platform and integrating them.

● Optimized the throughput of the application.
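The alerting rule above is not detailed in the source; the sketch below illustrates what a threshold-based check on sensor readings might look like. The sensor fields, threshold values, and alert format are assumptions for illustration (the actual pipeline ran on PySpark, which is omitted here).

```python
# Hypothetical rule-based alerting sketch: flag readings whose heart rate
# exceeds an upper limit or whose SpO2 falls below a lower bound.
# Thresholds and field names are illustrative, not from the project.

THRESHOLDS = {"heart_rate_high": 120, "spo2_low": 90}

def alerts_for(readings: list[dict]) -> list[str]:
    """Return one alert message per out-of-range reading."""
    out = []
    for r in readings:
        if r["heart_rate"] > THRESHOLDS["heart_rate_high"]:
            out.append(f"{r['sensor_id']}: heart rate {r['heart_rate']} above threshold")
        if r["spo2"] < THRESHOLDS["spo2_low"]:
            out.append(f"{r['sensor_id']}: SpO2 {r['spo2']} below threshold")
    return out

readings = [
    {"sensor_id": "s1", "heart_rate": 80, "spo2": 97},
    {"sensor_id": "s2", "heart_rate": 135, "spo2": 88},
]
print(alerts_for(readings))  # both alerts fire for s2
```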


Description

Project Description: The project involved working with data scientists and building scalable applications.

Responsibilities:

● Understanding the requirements for the current business problems, implementing them in Python and R, and deploying them as APIs with Django and RPlumber in Docker.

● Creating mock unit-test frameworks in Python as part of TDD.

● Worked on time-series analysis using an R library to find anomalies, deployed with RPlumber.
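The time-series anomaly work mentioned here used an R library; as a rough Python analogue of that idea, the sketch below flags points that deviate sharply from a trailing window (window size and z-cutoff are assumptions, not the project's actual method).

```python
import statistics

# Hypothetical rolling z-score check: flag a point as anomalous when it
# lies more than `z` standard deviations from the mean of the previous
# `window` points.

def rolling_anomalies(series, window=5, z=3.0):
    """Return indices of points far outside their trailing window."""
    hits = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu = statistics.mean(past)
        sd = statistics.stdev(past)
        if sd > 0 and abs(series[i] - mu) > z * sd:
            hits.append(i)
    return hits

data = [10, 11, 10, 12, 11, 10, 50, 11, 10, 12]
print(rolling_anomalies(data))  # [6] -- the spike at value 50
```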


Description

The project involved creating a GDPR-as-a-service product, which involved data engineering and exposing the results via REST APIs.

Responsibilities:

● Worked with PySpark for data engineering.

● Built REST APIs with Django REST Framework and documented them via Swagger.

● Applied SMOTE, K-Means clustering, and Random Forest regressors and classifiers to the business use case, for finding anomalies and recommending values.


Description

Project Description: Data Intelligent Suite (DIS) is a Flask-based tool used by data scientists to simplify anomaly detection. The service was exposed as a REST API.

Responsibilities:

● Developed the Flask application and was involved in applying machine-learning algorithms (One-Class SVM, Random Forest, XGBoost, LOF, Isolation Forest) for anomaly detection.

● Worked on statistical methods such as IQR and the modified Z-score (MOD-Z).

● Worked with spaCy for NER tagging and NLTK for pre-processing, building models to recognize keywords in data extracted from PDF sources.

● Deployed the solution in Docker.

● Created an automation script for CRUD operations, simplifying HBase work for developers and non-developers.

● Responsible for adding new functionalities to the application.
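Of the statistical methods listed, the IQR rule is simple enough to sketch: a point is flagged as an outlier if it falls outside [Q1 - 1.5·IQR, Q3 + 1.5·IQR]. The implementation below is an illustrative stdlib-only version, not code from DIS.

```python
import statistics

# Hypothetical sketch of the IQR outlier rule: compute the first and
# third quartiles, then flag values outside 1.5 * IQR of that range.

def iqr_outliers(values):
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

data = [12, 13, 12, 14, 13, 12, 99, 13, 14, 12]
print(iqr_outliers(data))  # [99]
```

In a Flask service like the one described, a function of this shape would sit behind a REST endpoint that accepts a list of values and returns the flagged points.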
