Sumit kumar S.

Cloud Architect

Lalkuan, India

Experience: 6 Years

54365 USD / Year

Availability: Immediate

About Me

    • 8+ years of experience in software development, debugging, and process improvement.
    • Experienced in Python, PySpark, MySQL, Hadoop, Hive, Sqoop, etc.
    • Experienced in AWS services such as Glue, Redshift, S3, Athena, Boto3, and EC2.
    • Experienced in data analysis using DataFrames through Spark SQL.
    • Experienced in working with Spark RDDs and Map/Reduce functions using PySpark.
    • Experienced in data migration from SAS 9.2 to AWS S3 using PySpark and Hive, as well as from S3 to Redshift (a sketch of this pattern follows this list).
    • Good knowledge of the relational database model.
    • Experienced in working with Python integrated development environments such as IDLE and Jupyter.
    • Experienced in handling large volumes of data and in data migration/export.
    • Played the role of Data Analyst, performing data profiling activities and establishing strong communication among all project stakeholders.
    • Possess good analytical and debugging skills.
    • Experienced in working on-site at client locations, orienting work to client requirements, and dealing with clients on a regular basis.
    • Worked regularly on tables, procedures, packages, functions, collections, shell scripting, and server management.
    • Strengths include meeting deadlines and deliverables while maintaining excellence and quality of work, and quickly learning new concepts and techniques.
    • Dedicated to producing professional work of the highest quality and creativity.
    • Strong understanding of the software work environment, with good analytical skills.
    • Possess excellent debugging skills.
    • Experienced in bulk data migration through Unix shell scripting.
    • Developed and enhanced PL/SQL programs and Unix shell scripts.
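
A minimal sketch of the Hive-to-S3-to-Redshift migration pattern referenced above, assuming a Hive source table staged on S3 as Parquet and then loaded into Redshift with COPY; the bucket, table, and IAM role names are placeholders, not the actual project's.

```python
# Minimal sketch of a Hive -> S3 -> Redshift migration path.
# Bucket, table, and IAM role names below are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-to-s3-migration")
         .enableHiveSupport()
         .getOrCreate())

# Read the source table from Hive and stage it on S3 as Parquet.
df = spark.sql("SELECT * FROM staging.customer_details")
df.write.mode("overwrite").parquet("s3a://my-data-lake/staging/customer_details/")

# Redshift then loads the staged files with a COPY statement, e.g.:
#   COPY mart.customer_details
#   FROM 's3://my-data-lake/staging/customer_details/'
#   IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
#   FORMAT AS PARQUET;
```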

Portfolio Projects

Description

Responsibilities:

  • Read and analyzed data using PySpark.

  • File handling using PySpark, creating DataFrames from files.

  • Worked with Spark SQL to write interactive queries against DataFrames.

  • Performed transformations and actions on Spark RDDs using PySpark.

  • Exposure to Map/Reduce functions through PySpark.

  • Developed and maintained PL/SQL functions and procedures.

  • Used PL/SQL analytical functions for data analysis and report creation.

  • Involved in requirement gathering and understanding the current system.

  • Responsible for building a data validation tool to validate tables between SAS and Hive (one plausible shape is sketched after this list).

  • Used PySpark operations, fixing the code according to the validation conditions.

  • Converted SAS code into PySpark code.

  • Data visualisation, building and querying tables using AWS Athena, and maintaining Athena objects in staging, mart, detail, etc.

  • Created tables in Hive, using Athena and other AWS services.

  • Analyzed data to identify the source of data errors.
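
One plausible shape for the SAS-vs-Hive validation tool described above: compare row counts and a per-column aggregate between the SAS export and the migrated Hive table. The file path, table, and column names are illustrative assumptions, not the project's actual ones.

```python
# Illustrative SAS-vs-Hive table validation; paths and names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("sas-hive-validation")
         .enableHiveSupport()
         .getOrCreate())

# SAS data exported to CSV on one side, the migrated Hive table on the other.
sas_df = spark.read.csv("s3a://migration/sas_export/orders.csv",
                        header=True, inferSchema=True)
hive_df = spark.table("mart.orders")

sas_df.createOrReplaceTempView("sas_orders")
hive_df.createOrReplaceTempView("hive_orders")

# Row-count check via an interactive Spark SQL query.
counts = spark.sql("""
    SELECT (SELECT COUNT(*) FROM sas_orders)  AS sas_rows,
           (SELECT COUNT(*) FROM hive_orders) AS hive_rows
""").first()
print("row counts match:", counts.sas_rows == counts.hive_rows)

# Column-level check: aggregate a numeric column on both sides.
sas_sum = sas_df.agg(F.sum("order_amount")).first()[0]
hive_sum = hive_df.agg(F.sum("order_amount")).first()[0]
print("order_amount totals match:", sas_sum == hive_sum)
```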

Description

Responsibilities:

  • Read and analyzed data using pandas, Boto3, and S3.

  • File handling using pandas, creating DataFrames from files.

  • Worked with SQL and wrote interactive queries against DataFrames.

  • Developed and maintained PL/SQL functions and procedures.

  • Used PL/SQL analytical functions for data analysis and report creation.

  • Involved in requirement gathering and understanding the current system.

  • Responsible for building data pipelines to validate tables from S3 (a minimal pandas/Boto3 sketch follows this list).

  • Used Python operations and AWS services, fixing the code according to the validation conditions.

  • Data visualisation, building and querying tables using AWS Redshift, and maintaining Redshift objects in mart, detail, etc.

  • Created tables in AWS Redshift.

  • Analyzed data to identify the source of data errors.
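
A minimal sketch of the pandas/Boto3 validation step named above: pull an object from S3, load it into a DataFrame, and run basic checks before the Redshift load. The bucket, key, and column names are placeholders.

```python
# Read an S3 object into pandas via Boto3 and run simple validations.
# Bucket, key, and column names are placeholders.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-data-lake", Key="detail/transactions.csv")

# Load the object body into a DataFrame and profile it.
df = pd.read_csv(io.BytesIO(obj["Body"].read()))
print("rows loaded:", len(df))
print("null counts per column:\n", df.isnull().sum())

# Rows failing a simple rule can be flagged before loading to Redshift.
bad = df[df["amount"] < 0]
print("rows with negative amount:", len(bad))
```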

Description

Responsibilities:

  • Read and analyzed data using PySpark in AWS Glue jobs.

  • Data extraction, transformation, and loading using PySpark and Spark SQL in AWS Glue (the standard job skeleton is sketched after this list).

  • Handled and managed large DataFrames.

  • Worked with Spark SQL to write interactive queries against DataFrames.

  • Performed transformations and actions on Spark RDDs using PySpark.

  • Exposure to Map/Reduce functions through PySpark.

  • Used the Python pandas library to handle file validations and transformations in a Windows Python environment.

  • Data visualisation, building and querying tables using AWS Athena, and maintaining Athena objects in staging and mart.
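
A sketch of the standard AWS Glue PySpark job skeleton for the ETL described above: read a catalog table, transform with Spark SQL, and write Parquet to S3 where Athena can query it. The database, table, and bucket names are placeholders.

```python
# Standard AWS Glue PySpark job skeleton; catalog and S3 names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Extract: read the source table from the Glue Data Catalog.
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="staging", table_name="events")

# Transform: run Spark SQL over the underlying DataFrame.
dyf.toDF().createOrReplaceTempView("events")
out = spark.sql("SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type")

# Load: write results to S3 as Parquet for Athena to query.
out.write.mode("overwrite").parquet("s3://my-data-lake/mart/event_counts/")

job.commit()
```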

