Nandini H.

An ETL Developer with Azure Data Lake, ADF, AWS Redshift, Hadoop, Informatica, Unix, and Teradata skills

Bengaluru, India

Experience: 7 Years

  • Rate: 48812.4 USD / Year
  • Notice Period: Days

About Me

...

Portfolio Projects

Description

Role: DW Developer (Jul 2019 - May 2020)

Description: One of PalmTree's clients acquired three companies specializing in film post-production services (the Picture Head, Picture Shop, and Formosa groups). Two of these companies are based in Burbank, CA, and one in London, UK. PalmTree wanted to build a data analytics system to measure post-merger performance for this client. The client was looking to automate its data integration process and replace its existing reporting methods with an enterprise business intelligence solution. The goal was to implement a data warehouse solution to support consolidated reporting across all of these companies.

Responsibilities:

  • Gathered requirements from PalmTree's client companies.
  • Analyzed and designed the source and target systems.
  • Created the physical data model in the Azure SQL database.
  • Used Azure Data Factory pipelines to build and automate transformation processes.
  • Created mapping and design documents.
  • Set up alerts for process failures and data validation discrepancies.
  • Created stored procedures to build audit and control tables for the automated process.
  • Created unit test cases for data validation between the source and the data warehouse (a hedged sketch of this kind of check follows this list).
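
Purely as an illustration of the source-to-warehouse validation mentioned in the last item, the sketch below compares row counts over ODBC with pyodbc. The connection strings and table names are placeholders, not the project's actual objects, and the real checks would compare more than counts.

# Hypothetical row-count validation between a source table and its
# warehouse counterpart; DSNs and table names are illustrative only.
import pyodbc

SOURCE_DSN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=source-db;DATABASE=src;UID=user;PWD=secret"
TARGET_DSN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=azure-sql;DATABASE=dw;UID=user;PWD=secret"

def row_count(dsn: str, table: str) -> int:
    """Return the row count of a table over an ODBC connection."""
    with pyodbc.connect(dsn) as conn:
        cur = conn.cursor()
        # table names come from a trusted config in this sketch
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

def validate(source_table: str, target_table: str) -> None:
    src = row_count(SOURCE_DSN, source_table)
    tgt = row_count(TARGET_DSN, target_table)
    status = "PASS" if src == tgt else "FAIL"
    print(f"{source_table} -> {target_table}: source={src} target={tgt} {status}")

if __name__ == "__main__":
    validate("dbo.Invoices", "dw.FactInvoice")  # placeholder tables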

Description

Description: The Lead Group is a premier performance-based online marketing company specializing in Data & List Management, Email, Display, and Affiliate Marketing. This project focused on revenue reporting built on the email marketing data generated through their proprietary email delivery platform, Prime Lead.

Responsibilities:

  • Worked on requirement and design documents.
  • Loaded data from the source MySQL database into an Amazon S3 bucket.
  • Analyzed source data and set up the Redshift warehouse for reporting purposes.
  • Applied compression encodings, distribution styles, and sort keys while setting up the warehouse (a hedged sketch follows this list).
  • Set up ETL jobs in Pentaho to load data into the warehouse.
  • Created users and assigned roles to control access to the warehouse.
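
As a rough sketch of the Redshift setup pattern described above (not the project's actual schema), the snippet below creates a table with a distribution key, sort key, and column encodings, then bulk-loads staged S3 files with COPY. The table, columns, bucket path, IAM role, and cluster endpoint are all assumed placeholders.

# Illustrative Redshift setup and load; every object name here is assumed.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS email_revenue (
    send_date    DATE          ENCODE az64,
    campaign_id  BIGINT        ENCODE az64,
    list_id      BIGINT        ENCODE az64,
    revenue_usd  DECIMAL(12,2) ENCODE az64
)
DISTSTYLE KEY
DISTKEY (campaign_id)
SORTKEY (send_date);
"""

COPY_CMD = """
COPY email_revenue
FROM 's3://example-bucket/prime-lead/email_revenue/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS CSV
IGNOREHEADER 1;
"""

with psycopg2.connect(host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
                      port=5439, dbname="analytics", user="etl_user", password="secret") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)       # distribution on the join key, sort on the common filter column
        cur.execute(COPY_CMD)  # bulk load staged S3 files into the table
    conn.commit()

Distributing on the key used in joins and sorting on the date column used in reporting filters is the usual reasoning behind choices like these; the actual keys would depend on the real query patterns.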

Description

Description: There are two different warehouses, FADW and RADW, that are used for global reporting solutions. FADW is used for licensing revenue, which includes sales, royalties, advances, guarantees, forecasts, and licensed material/intellectual property, whereas RADW contains sales and inventory information for retail stores in various locations. This system helped collect sales data from the various stores through files and stored it in the data warehouse.

Responsibilities:

  • Worked on user-escalated issues and prepared analysis documents for them.
  • Prepared Change Request documents and provided estimations.
  • Created Unix scripts and modified existing mappings by adding command tasks to address user issues (a hedged sketch of one such check follows this list).
  • Investigated optimization techniques to reduce throughput time.
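
For illustration only, here is the kind of pre-load file check a command task might invoke before a session runs. The original scripts were Unix shell scripts rather than Python, and the feed path and size threshold below are assumptions.

# Hypothetical pre-load check; a non-zero exit code signals the workflow to stop.
import os
import sys

FEED_PATH = "/data/inbound/retail_sales.dat"  # assumed daily feed location
MIN_BYTES = 1                                  # reject empty files

def main() -> int:
    """Return 0 if the feed file is present and non-empty, else 1."""
    if not os.path.isfile(FEED_PATH):
        print(f"Missing feed file: {FEED_PATH}", file=sys.stderr)
        return 1
    if os.path.getsize(FEED_PATH) < MIN_BYTES:
        print(f"Feed file is empty: {FEED_PATH}", file=sys.stderr)
        return 1
    print(f"Feed file OK: {FEED_PATH}")
    return 0

if __name__ == "__main__":
    sys.exit(main())

Failing fast on a missing or empty file is what lets a command task stop the load before bad data reaches the warehouse.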

Description

Role: ETL Developer (Jan 2016 - Dec 2017)

Description: Solera Holdings is an American company that provides risk management and asset protection software and services to the automotive industry and the property insurance marketplace. The purpose of this project was to retrieve information related to vehicle claims, manufacturers, OEM and non-OEM parts, and dealers from the front-end applications for CEE countries and load it into a data lake for analysis.

Responsibilities:

  • Coordinated with onsite, development, and sourcing teams during planning and execution.
  • Created multiple scripts to extract large data sets from the source to the data lake.
  • Implemented file-level validations using Unix scripts.
  • Developed BTEQ scripts to map data from source locations to the warehouse.
  • Prepared:
    • File-watcher and file-mover scripts to load data from the mount location (a hedged sketch of this pattern follows this list)
    • Shell scripts to automate the load process for CEE countries into different schemas
    • Cron jobs and scheduled loads
  • Collaborated with different teams to debug data issues; created unit test documents and code review documents
  • Created design documents
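
The file-watcher and file-mover scripts above were shell scripts; as a comparable sketch in Python, the snippet below polls a mount location and moves matching files into a landing area. The paths, file pattern, and poll interval are all assumed for illustration.

# Hypothetical file-watcher/file-mover; directory paths and pattern are assumptions.
import glob
import os
import shutil
import time

MOUNT_DIR = "/mnt/source_feed"      # where upstream drops files
LANDING_DIR = "/data/landing/cee"   # where the load process picks them up
PATTERN = "claims_*.csv"
POLL_SECONDS = 60

def move_new_files() -> int:
    """Move any files matching PATTERN from the mount to the landing area."""
    moved = 0
    for path in glob.glob(os.path.join(MOUNT_DIR, PATTERN)):
        dest = os.path.join(LANDING_DIR, os.path.basename(path))
        shutil.move(path, dest)
        moved += 1
    return moved

if __name__ == "__main__":
    os.makedirs(LANDING_DIR, exist_ok=True)
    while True:
        count = move_new_files()
        if count:
            print(f"Moved {count} file(s) to {LANDING_DIR}")
        time.sleep(POLL_SECONDS)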

Description

Role: Hadoop Developer (Aug 2015 - Dec 2015)

Description: The aim of this project was to perform sentiment analysis on DCP data from social networking sites such as Facebook and Twitter, and to provide reporting solutions for the analysis performed on different Disney characters and clients.

Responsibilities:

  • Imported data from social sites into HDFS using Flume.
  • Developed Hive queries to load and process data in the Hadoop file system; used machine learning algorithms for classification.
  • Prepared MapReduce programs to cleanse the raw data (a hedged sketch follows this list).
  • Conducted sentiment analysis on reviews of the products on the client's website.
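
As an illustration of the raw-data cleansing step, here is a Hadoop Streaming style mapper sketched in Python. The original jobs may well have been implemented differently, and the tab-separated post_id/text input layout is an assumption.

# Illustrative streaming mapper: reads raw records from stdin,
# emits cleaned post_id<TAB>text records for downstream analysis.
import re
import sys

def clean(text: str) -> str:
    """Lowercase, strip URLs and mentions, keep words and hashtags."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"@\w+", " ", text)           # drop @mentions
    text = re.sub(r"[^a-z0-9#\s]", " ", text)   # drop other punctuation
    return re.sub(r"\s+", " ", text).strip()

def main() -> None:
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t", 1)
        if len(parts) != 2:
            continue  # skip malformed records
        post_id, raw_text = parts
        cleaned = clean(raw_text)
        if cleaned:
            print(f"{post_id}\t{cleaned}")

if __name__ == "__main__":
    main()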
