Prashanth Reddy P.

Data Engineer

Hyderabad, India

Experience: 5 Years

333640 USD / Year

About Me

3+ years of experience in Hadoop and PeopleSoft. Industry experience includes projects in the Manufacturing and Banking domains. Proficient in Big Data technologies: Hive, HDFS, Impala, Spark, and Sqoop. Knowledge of writing complex SQL queries. Experience...

Portfolio Projects

Description

Healthcare domain

Description

I have worked in the retail domain, handling supply chain data from different sources as well as order details.

Description

Position

Data Engineer

Environment

Hadoop Ecosystem (HDFS, Hive, Sqoop, Impala, Spark, Scala)

Overview:

Supply-chain management coordinates all parts of the supply chain, from sourcing raw materials to delivering (and, where needed, returning) products, and aims to minimize total cost despite conflicting objectives among chain partners. One such conflict is between the sales department, which wants higher inventory levels to fulfil demand, and the warehouse, which wants lower inventories to reduce holding costs.

Roles & Responsibilities:

  • Imported and exported data between relational databases and HDFS using Sqoop.
  • Created Hive tables, loaded them with data, and wrote Hive queries.
  • Used Hive to analyse the partitioned data and compute various metrics for reporting.
  • Worked on the custom build tool Automic for job scheduling.
  • Used Spark SQL to load data in different formats, create schema RDDs, load them into Hive tables, and handle structured data with Spark SQL and DataFrames (see the sketch below).
  • Worked on Spark and Kafka.
  • Used GitHub for version control.

Technologies: Hive, Sqoop, Oracle, HDFS, Impala, Spark.
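Illustrative sketch of the Spark SQL to Hive load described above (Scala). This is a minimal, hypothetical example: the file paths, staging view names and the target table supply_chain.order_shipments are placeholders assumed for the sketch, not the project's actual objects.

  import org.apache.spark.sql.SparkSession

  object SupplyChainLoad {
    def main(args: Array[String]): Unit = {
      // Hive support is required so Spark can write managed Hive tables.
      val spark = SparkSession.builder()
        .appName("supply-chain-load")
        .enableHiveSupport()
        .getOrCreate()

      // Source files arrive in different formats; Spark SQL reads each into a DataFrame.
      val shipments = spark.read.option("header", "true").csv("/data/raw/shipments/") // placeholder path
      val orders    = spark.read.parquet("/data/raw/orders/")                          // placeholder path

      // Handle the structured data with Spark SQL before loading into Hive.
      shipments.createOrReplaceTempView("shipments_stg")
      orders.createOrReplaceTempView("orders_stg")
      val enriched = spark.sql(
        """SELECT o.order_id, o.order_date, s.carrier, s.ship_date
          |FROM orders_stg o
          |JOIN shipments_stg s ON o.order_id = s.order_id""".stripMargin)

      // Write the joined result into a Hive table used by downstream reporting.
      enriched.write.mode("overwrite").saveAsTable("supply_chain.order_shipments")

      spark.stop()
    }
  }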

Description

Position

Hadoop Developer

Environment

Hadoop Ecosystem (HDFS, Hive, Sqoop, Impala, Spark)

Overview:

Microsoft Corporation is an American multinational technology company with headquarters in Redmond, Washington. It develops, manufactures, licenses, supports, and sells computer software, consumer electronics, personal computers, and related services.

Roles & Responsibilities:

  • Imported data from relational databases using Spark (see the sketch below).
  • Created Hive tables, loaded them with data, and wrote Hive queries.
  • Used Hive to analyse the partitioned data and compute various metrics for reporting.
  • Applied Hive optimization techniques for joins and followed best practices when writing Hive scripts in HiveQL.
  • Used Spark SQL to load data in different formats, create schema RDDs, load them into Hive tables, and handle structured data with Spark SQL.
  • Worked with Spark SQL on top of Hive.
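Illustrative sketch of importing a relational table with Spark and landing it in Hive (Scala). The JDBC URL, credentials, source table and target table below are hypothetical placeholders assumed for the example, not the project's actual configuration.

  import org.apache.spark.sql.SparkSession

  object JdbcToHive {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("jdbc-to-hive-ingest")
        .enableHiveSupport() // lets Spark write managed Hive tables
        .getOrCreate()

      // Pull a source table from a relational database over JDBC.
      val sourceDf = spark.read
        .format("jdbc")
        .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL") // placeholder JDBC URL
        .option("dbtable", "SALES.ORDERS")                      // placeholder source table
        .option("user", sys.env("DB_USER"))
        .option("password", sys.env("DB_PASSWORD"))
        .load()

      // Persist the imported data as a partitioned Hive table for later Hive queries.
      sourceDf.write
        .mode("overwrite")
        .partitionBy("ORDER_DATE") // placeholder partition column
        .saveAsTable("staging.orders")

      spark.stop()
    }
  }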

Description

Position

Hadoop Developer

Environment

Hadoop Ecosystem (HDFS, Hive, Sqoop, Impala, Spark)

Overview:

The Commonwealth Bank of Australia is an Australian multinational bank with businesses across New Zealand, Asia, the United States and the United Kingdom. It provides a variety of financial services including retail, business and institutional banking, funds management, superannuation, insurance, investment and broking services. The Commonwealth Bank is the largest Australian listed company on the Australian Securities Exchange as of August 2015, with brands including Bankwest, Colonial First State Investments, ASB Bank (New Zealand), Commonwealth Securities (CommSec) and Commonwealth Insurance (CommInsure). Commonwealth Bank is also the largest bank in the Southern Hemisphere.

Roles & Responsibilities:

  • Imported and exported data between relational databases and HDFS using Sqoop.
  • Created Hive tables, loaded them with data, and wrote Hive queries.
  • Used Hive to analyse the partitioned data and compute various metrics for reporting.
  • Applied Hive optimization techniques for joins and followed best practices when writing Hive scripts in HiveQL.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Used Spark SQL to load data in different formats, create schema RDDs, load them into Hive tables, and handle structured data with Spark SQL.
  • Created Impala tables and used Impala queries to build reports based on requirements.
  • Performed interactive analysis of Hive tables through various DataFrame operations using Spark SQL (see the sketch below).
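Illustrative sketch of analysing a Hive table through DataFrame operations and publishing a Parquet-backed table that Impala can report on (Scala). The database, table and column names are hypothetical placeholders assumed for the example.

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{count, sum}

  object AccountMetrics {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("account-metrics")
        .enableHiveSupport()
        .getOrCreate()

      // Read an existing (partitioned) Hive table as a DataFrame.
      val txns = spark.table("banking.transactions") // placeholder database.table

      // DataFrame operations: aggregate per account and day for reporting.
      val metrics = txns
        .groupBy("account_id", "txn_date")
        .agg(count("*").as("txn_count"), sum("amount").as("total_amount"))

      // Store as a Parquet table so Impala queries can build reports on it
      // (after an INVALIDATE METADATA / REFRESH on the Impala side).
      metrics.write
        .mode("overwrite")
        .format("parquet")
        .saveAsTable("reporting.account_daily_metrics")

      spark.stop()
    }
  }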

Description

Position

Technical Consultant

Environment

PeopleSoft 9.1, PeopleTools 8.53

Overview:

  • The World Bank is an international financial institution that provides loans to developing countries for capital programs.
  • The World Bank's official goal is the reduction of poverty.
  • HCL provided the opportunity to work with the World Bank client as a PeopleSoft technical resource.

Roles & Responsibilities:

  • Core HR, Position Management Simplification.
  • Studied the application functionality, understood the business process, and shared the knowledge with team members.
  • Analysed and resolved problem tickets in a timely manner.
  • Prepared and executed test cases as per system requirements.
  • Proficient in understanding software requirement specifications and identifying the required test scenarios.
  • Provided technical support in the design, development, testing, and deployment of PeopleSoft applications.
  • Coordinated with technical leads and functional users to understand requirements.
  • Analysed and fixed bugs in the existing HCM system.
  • Maintained project technical documentation for management review.
  • Migrated objects and coordinated go-live activities.
