Gudipati S.

In 14+ years of total experience: 4.5 years using Hadoop and Hive, and 2+ years using GCP BigQuery for data warehousing.

India

Experience: 14 Years

33364 USD / Year

  • Notice Period: Days

About Me

  • In 14.8 years of experience, 4.5 years using Hadoop and Hive for data warehousing on a Big Data project.
  • 2+ years of experience with Google Cloud Platform (Google Cloud Storage, BigQuery and...
  • Attended training on creating data flows using PySpark (see the sketch after this list).
  • 7 years of experience in Information Technology as a Teradata Developer in Data Warehousing, with strong experience in all phases.
  • 12 years of experience in Information Technology as a DataStage / ETL Developer, including 5 years as a lead in Data Warehousing, with strong experience in all phases.
  • Expertise in data requirement analysis, design, and development of ETL processes using IBM DataStage versions 7.5.1, 7.5.2, 8.0.1, 8.1, and 11.3.
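A minimal sketch of the kind of PySpark data flow referred to above; the paths, table, and column names are illustrative assumptions, not details from any of the projects:

    # Minimal PySpark data-flow sketch; paths and schema are assumed for illustration.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dw-dataflow").getOrCreate()

    # Read raw source data from a hypothetical landing path.
    orders = spark.read.parquet("/data/raw/orders")

    # Typical warehousing transform: filter, derive a date, aggregate.
    daily = (
        orders.filter(F.col("status") == "COMPLETE")
              .withColumn("order_date", F.to_date("order_ts"))
              .groupBy("order_date", "region")
              .agg(F.sum("amount").alias("total_amount"))
    )

    # Load the result into a hypothetical warehouse table.
    daily.write.mode("overwrite").saveAsTable("dw.daily_sales")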

Portfolio Projects

Description

Citibank was looking to develop a reporting solution to mitigate its non-quantifiable franchise risks and comply with the OCC (Office of the Comptroller of the Currency) stipulation that contract-level data be provisioned for timely, accurate submissions. It needed the vendor to take ownership of building a single data warehouse hosting a common pool of Contracts, Positions, and Balances, organized on an enterprise-wide basis spanning all of its LOBs.

Contribute

• Involved in a POC on using Hadoop and Hive technologies alongside the DataStage ETL tool (sketched below).
• Technical Lead for this project.
• Responsible for UAT and PROD support.
• Involved in Jenkins code promotion.
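A hedged sketch of the kind of Hive-on-Hadoop step such a POC might prototype against the equivalent DataStage job; the database, table, and column names are assumptions:

    # Hypothetical Hive ETL step, run through PySpark's Hive support.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-etl-poc")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Cleanse raw contract records into a curated Hive table,
    # mirroring what the corresponding DataStage job produces.
    spark.sql("""
        INSERT OVERWRITE TABLE curated.contracts
        SELECT contract_id,
               TRIM(customer_name)            AS customer_name,
               CAST(balance AS DECIMAL(18,2)) AS balance
        FROM   staging.contracts_raw
        WHERE  contract_id IS NOT NULL
    """)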

Description

The objective of the project is to deliver a Compliance Data Platform (CDP) that enables HSBC to extract, integrate, and house FCC-identified data centrally for system control analytics (SAC) and management information and data governance (MI & DG) purposes, providing consistent descriptions and views of data from existing federated solutions. The primary focus is on generating reports that provide insight and control for future operation of the business and allow presentation of fully productionized MI and analytics with improved data quality.

• CDP is a fully secure model, with data access available to the respective regions/business units.
• Consistent and quality view of data from existing federated solutions.
• An analytics environment with access to data in its raw format as well as normalized and dimensional data models.
• An audit-controlled application architecture that meets strategic data management requirements.

At a high level, CDP is a combination of a data ingestion process, a storage layer named the data lake, a management information layer named the normalized area, and framework components.
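A minimal sketch of that layering in PySpark; the lake paths and fields are illustrative assumptions rather than the platform's actual design:

    # Illustrative CDP-style layering: ingest to the data lake, then normalize.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("cdp-layers").getOrCreate()

    # Ingestion: land the source extract in the data lake unchanged.
    raw = spark.read.json("/lake/raw/fcc_feed/")
    raw.write.mode("append").parquet("/lake/landing/fcc_feed/")

    # Normalized area: a conformed, de-duplicated view for MI & DG reporting.
    normalized = (
        raw.select(
            F.col("txn_id"),
            F.upper(F.col("region")).alias("region"),
            F.to_date("booking_date").alias("booking_date"),
        ).dropDuplicates(["txn_id"])
    )
    normalized.write.mode("overwrite").parquet("/lake/normalized/transactions/")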

Description

As part of the Bell IPTV Personalized Recommendation and Enhanced Search project, the RECO Engine, which will sit in the network and integrate with Fibe TV to provide recommendations to customers, requires Bell TV subscribers' information.

The Business Intelligence team's work on this project is to ensure that daily changes to Bell TV subscribers' information and/or their subscriptions are captured, and that the new subscribers' information and/or subscription data is sent to the RECO Engine. BI will also produce daily full-refresh Channel Mapping and Service Authorization feeds for the RECO Engine.
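One common way to derive such a daily delta feed is to diff consecutive snapshots; a minimal PySpark sketch, with snapshot paths and the feed location assumed for illustration:

    # Hypothetical daily change capture: diff today's snapshot against yesterday's.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("reco-delta-feed").getOrCreate()

    today = spark.read.parquet("/bi/subscribers/snapshot_today")
    yesterday = spark.read.parquet("/bi/subscribers/snapshot_yesterday")

    # Rows present today but not yesterday are new or changed subscribers.
    delta = today.subtract(yesterday)
    delta.write.mode("overwrite").csv("/feeds/reco/subscribers_delta", header=True)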

The BI team will also generate the one-time initial load of Bell TV subscribers' information, Bell TV subscriptions, and Media Room information from the existing BVu tables.

Description

The new iService system is designed to fortify and replace the current iService application, creating a repository for normalized EPG data.

The new iService system processes scheduling and programming information for DTH and IPTV services and creates EPG (Electronic Programming Guide) data for a variety of target systems. The new system receives data from multiple source systems, over a variety of protocols and data formats, which is then integrated and stored in a normalized database. The data is further extracted to files, in formats agreed between the iService system and the corresponding downstream systems, and published to the agreed locations. Each file transfer is also notified in real time via a web-service call.
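A minimal sketch of the publish-and-notify step; the target location, endpoint URL, and payload fields are hypothetical:

    # Hypothetical EPG file publish followed by a real-time web-service notification.
    import shutil
    import requests

    def publish_epg_file(local_path: str, target_path: str) -> None:
        # Publish the extracted EPG file to the agreed location.
        shutil.copy(local_path, target_path)
        # Notify the downstream system in real time via a web-service call.
        resp = requests.post(
            "https://downstream.example.com/epg/notify",  # assumed endpoint
            json={"file": target_path, "status": "PUBLISHED"},
            timeout=30,
        )
        resp.raise_for_status()

    publish_epg_file("/tmp/epg_dth.xml", "/pub/epg/epg_dth.xml")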

The iService2 system is redesigned to create a single, all-encompassing repository of programming and scheduling information. The repository will provide the infrastructure to store the data in one database, and the capability to feed existing downstream systems with the information they require without changing the current file formats and delivery systems.
