About Me
3+ years of experience in Data Warehousing and Big Data technologies. Good knowledge of the Big Data technology stack: Apache Spark, Hadoop, Pig, Hive, Sqoop, Spark Streaming, Apache Kafka, Cassandra, and Kubernetes. Implemented ETL pipelines using Big Data technologies and tools. Experienced in writing Spark applications, Hive queries, and shell scripts. Experience building ETL pipelines in the GCP environment using BigQuery and Apache Spark (on Dataproc clusters).
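The extract-transform-load pattern mentioned above can be sketched in plain Python. This is a minimal illustration of the three stages, not the actual pipeline: the record schema, the `qty > 0` validity rule, and the in-memory "warehouse" are all hypothetical stand-ins for real sources and sinks.

```python
def extract(raw_rows):
    """Parse raw CSV-like strings into dicts (the 'extract' stage)."""
    out = []
    for row in raw_rows:
        name, qty = row.split(",")
        out.append({"name": name.strip(), "qty": int(qty)})
    return out

def transform(records):
    """Keep valid rows and derive an uppercase key (the 'transform' stage)."""
    return [
        {**r, "key": r["name"].upper()}
        for r in records
        if r["qty"] > 0
    ]

def load(records, target):
    """Append records to the target store (the 'load' stage)."""
    target.extend(records)
    return len(records)

warehouse = []
rows = ["spark, 3", "hive, 0", "kafka, 2"]
loaded = load(transform(extract(rows)), warehouse)
```

In a real Spark/BigQuery pipeline each stage would be a distributed read, a DataFrame transformation, and a write to a managed table, but the stage boundaries are the same.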
Skills
Positions
Portfolio Projects
Description
Built a real-time, distributed pipeline for Sleepiz India using Cassandra as the database. The pipeline ingests real-time data from devices that derive values from electromagnetic waves, processes it further for digital signal processing and machine learning programs, and serves the data to a UI for the customers, who are doctors.
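The stream-processing step described above can be sketched as a simple smoothing pass over incoming device readings. A moving average stands in here for the real digital-signal-processing stage; the window size and reading values are hypothetical.

```python
from collections import deque

def smooth(readings, window=3):
    """Yield a moving average over the last `window` readings,
    smoothing raw device values before downstream consumers see them."""
    buf = deque(maxlen=window)
    for value in readings:
        buf.append(value)
        yield sum(buf) / len(buf)

raw = [2.0, 4.0, 6.0, 8.0]
smoothed = list(smooth(raw))
```

In the real pipeline this kind of windowed operation would run inside Spark Streaming with Kafka as the source and Cassandra as the sink, but the windowing idea is the same.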
Description
Understood the requirements and created an ETL pipeline based on the client's use case. Worked on a framework for bulk ingestion that handled multiple scenarios, such as PII tables and ingestion of tables with or without specific columns. Automated the end-to-end process of ingesting and validating data according to the client's requirements. Worked on transforming and handling ingested data in the raw layer.
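The ingestion-time checks described above can be sketched as two small helpers: one that verifies a table carries its expected columns, and one that masks PII fields before the data lands in the raw layer. The column names, the PII list, and the masking rule are hypothetical examples, not the client's actual framework.

```python
# Hypothetical set of columns treated as PII in this sketch.
PII_COLUMNS = {"email", "phone"}

def validate_columns(rows, required):
    """Return True only if every row carries all required columns."""
    return all(required <= set(row) for row in rows)

def mask_pii(rows):
    """Replace values in PII columns with a fixed mask."""
    return [
        {k: ("***" if k in PII_COLUMNS else v) for k, v in row.items()}
        for row in rows
    ]

table = [{"id": 1, "email": "a@b.com", "phone": "123"}]
ok = validate_columns(table, {"id", "email"})
masked = mask_pii(table)
```

A production framework would drive these checks from per-table metadata so that PII handling and required-column rules stay configurable rather than hard-coded.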