Lakshman S.

Data Engineer

Hyderabad, India

Experience: 5 Years

24022.1 USD / Year

  • Availability: Immediate

About Me

Working experience with Big Data technologies such as Spark, Hadoop, Python, Hive, Impala, and Sqoop. Strong experience in developing ETL workflows using PySpark and optimizing Spark applications. Skilled in Python modules such as PySpark, Pandas, NumPy, and...

Portfolio Projects

Description

The primary goal of this project is to bring the trading data generated by multiple trading systems onto one platform. The source data generally arrives in system-specific formats. As part of this project, we collect data from all the source systems, perform data cleansing, unification, and transformation using Spark, and store the results in a data lake (Hive tables). The Hive tables are exposed to the downstream applications that generate reports. The source data is also archived for future use.
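As a rough illustration of the cleanse-and-unify step described above, the sketch below maps records from two system-specific formats onto one common schema. All field names, source-system names, and sample values are hypothetical; the actual pipeline performs this with Spark DataFrames and writes the results to Hive tables.

```python
# Hypothetical sketch of cleansing and unifying trade records from
# multiple source systems into a common schema. Field names and sample
# data are invented for illustration; the real pipeline uses PySpark.

def unify_trade(record: dict, source: str) -> dict:
    """Map one system-specific trade record onto the common schema."""
    if source == "system_a":  # assumed format: 'trade_id' / 'ticker' / 'qty'
        return {
            "trade_id": record["trade_id"].strip(),
            "symbol": record["ticker"].upper(),
            "quantity": int(record["qty"]),
        }
    if source == "system_b":  # assumed format: 'id' / 'sym' / 'amount'
        return {
            "trade_id": record["id"].strip(),
            "symbol": record["sym"].upper(),
            "quantity": int(record["amount"]),
        }
    raise ValueError(f"unknown source system: {source}")

raw = [
    ({"trade_id": " T1 ", "ticker": "infy", "qty": "100"}, "system_a"),
    ({"id": "T2", "sym": "tcs", "amount": "50"}, "system_b"),
]
unified = [unify_trade(rec, src) for rec, src in raw]
print(unified[0]["symbol"])  # INFY
```

In the real pipeline, the same per-source mapping logic would be expressed as Spark DataFrame transformations so that it scales across the full data volume.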

Technologies and Tools: Hive, Spark, Sqoop, MySQL, Python, PySpark

Responsibilities:  Developed Sqoop jobs to pull data from Different RDBMS Systems like Oracle, Teradata and MySql.  Implemented Spark frameworks to clean and transform the data using Spark Data FrameAPI.  Analyzed performance issues in spark application, optimized the spark code and resource allocation.  Enhanced the hive queries to standardize the format and improve theperformance

Description

This project is envisioned for faster onboarding of "unmanaged" information residing in various source systems such as the Sales Information and Monitoring System (SIMS) and the Finance Directory Finance Manager (FDFM). The plan is to leverage Hadoop capabilities by loading the SIMS and FDFM information into HDFS and provisioning it for operational reporting purposes.

Technologies and Tools: Hive, Spark, Python, Impala, HDFS, Control-M

Responsibilities:  Involved in requirement analysis team to understand project requirements and documentthe same.  Implemented the PySpark code to parse and process the Json data.  Developed the Hive Queries to transform the data and apply the businesslogic.  Worked with production support team in fixing the bugs reported.

Description

Treasury is a project to consolidate dashboards for the monitoring, forecasting, and management of funding. It also aims to validate the future-state user interface solution for data visualization and its ability to integrate with the right technology solutions.

Technologies and Tools: Hive, Spark, Python, Impala, HDFS, Control-M

Responsibilities:  Investigated the root cause for the regular failure of Treasury project in production, prepared RCA document, proposed system enhancements, implemented the enhancements and solved the issue.  Prepared the knowledge documents for the common production issues and itssolution.  Coordinating with Dev team at client side to understand the technical issues andworking with them to resolve the issues.  Worked with Admin team in case of cluster down issue and reporting thedelay tobusiness team.
