Nandini H.

Manager

Bangalore, India

Experience: 7 Years

48812.4 USD / Year

  • Notice Period: Days


About Me

9.8 Years of experience in developing Data Integration solutions, Data Management, and implementation of Enterprise Business Intelligence Applications. Worked on moderately complex projects by providing end-to-end solutions in teams of 3-25 people. P...


Portfolio Projects

Description

SonarQube is a tool integrated with the repos to maintain coding standards in Databricks notebooks. All the pipelines are integrated with SonarQube pipeline runs to identify code smells/bugs in the notebooks. Any code smells/bugs identified must be corrected before merging into the main branch and deploying the code to other environments.


Description

As part of an IRM activity, the goal of this project is to enable log analytics for services such as Azure Data Lake, ADF, ADB, SQL DB, Event Hubs, Logic Apps, and Key Vaults across all environments on the MDH platform.
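A minimal sketch of how such onboarding might look with the azure-mgmt-monitor SDK, assuming hypothetical subscription, resource, and workspace IDs; the exact log categories and parameter shape vary by service and SDK version and are illustrative only.

```python
# Minimal sketch, assuming azure-identity and azure-mgmt-monitor are installed.
# All IDs below are placeholders, and "allLogs"/"AllMetrics" are illustrative;
# actual category names depend on the target service and API version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # hypothetical
WORKSPACE_ID = (
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
    "Microsoft.OperationalInsights/workspaces/<la-workspace>"
)  # hypothetical Log Analytics workspace

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One entry per service to onboard (ADF, ADB, SQL DB, Event Hubs, Logic Apps, Key Vault, ...)
resource_ids = [
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<adf-name>",
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<kv-name>",
]

for resource_id in resource_ids:
    # Route the resource's logs and metrics to the Log Analytics workspace
    client.diagnostic_settings.create_or_update(
        resource_uri=resource_id,
        name="send-to-log-analytics",
        parameters={
            "workspace_id": WORKSPACE_ID,
            "logs": [{"category_group": "allLogs", "enabled": True}],
            "metrics": [{"category": "AllMetrics", "enabled": True}],
        },
    )
```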


Description

The Worldwide Market Data (WWMD) is a global Market Measurement application that delivers official P&G shares at the global, country, and category level. It presents data on the performance (Market Share and Market Size) of P&G and its competitors in selected product categories to P&G CEO(s), the Board of Directors, General Managers, and Business Analysts. The objective of Project Stratus is to migrate the on-premises WWMD application to the Azure stack.


Description

Customer NOS & Gross Contribution are derived facts built from multiple upstream data types. NOS is an indicator of revenue/sales generated, and Gross Contribution (GC) is an indicator of profit. GC is the official performance KPI for the Sales function. Facts are calculated on a monthly basis for reporting and downstream applications, and Master Data (customer/product) is maintained monthly or quarterly. The C-NOS & GC Data Service enables granular visibility into NOS and GC, in a tax-compliant way, for Monthly Reporting, Business Planning, and Post Event Analytics (PEA).
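A minimal pandas sketch of what a monthly derived-fact calculation could look like; the column names and the NOS/GC formulas below are assumptions for illustration, not the actual upstream data types or tax logic of the service.

```python
# Minimal sketch with hypothetical columns; the real deduction and tax treatment
# behind NOS and GC is not shown in the source and is simplified here.
import pandas as pd

# Hypothetical invoice-level input
sales = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2"],
    "fiscal_month": ["2021-01", "2021-01", "2021-01"],
    "gross_sales": [1000.0, 500.0, 800.0],
    "deductions": [100.0, 50.0, 60.0],      # discounts, allowances, etc.
    "cost_of_goods": [600.0, 300.0, 500.0],
})

# Illustrative derivations: NOS as gross sales less deductions, GC as NOS less cost of goods
sales["nos"] = sales["gross_sales"] - sales["deductions"]
sales["gc"] = sales["nos"] - sales["cost_of_goods"]

# Monthly, customer-level facts for reporting and downstream applications
monthly_facts = (
    sales.groupby(["customer_id", "fiscal_month"], as_index=False)[["nos", "gc"]].sum()
)
print(monthly_facts)
```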


Description

DPSS consistently acquires large streams of data from multiple online transactional systems. Most of this data is sourced from the DPSS Eligibility System. IHSS Program Overview: The In-Home Supportive Services (IHSS) program provides in-home assistance to eligible aged, blind, and disabled individuals as an alternative to out-of-home care and enables recipients to remain safely in their own homes. Care providers and recipients sign the time sheets and submit them to the county to process payments through the statewide Case Management, Information, and Payrolling System (CMIPS). Since the existing system is 12 years old and the reports were built using Cognos, for which there is no longer support, the DPSS team wants to build an Enterprise data warehouse and Business Intelligence system using cloud technology.


Description

Role: DW Developer (July 2019 - May 2020)

Description: One of PalmTree's clients acquired three companies (Picture Head, Picture Shop, and the Formosa group) specializing in film post-production services. Two of these companies are based in Burbank, CA, and one in London, UK. PalmTree wanted to build a data analytics system to measure post-merger performance for this client. The client was seeking to automate their data integration process and replace existing reporting methods with an Enterprise Business Intelligence Solution. The goal was to implement a data warehouse solution to support the consolidation of reporting across all their companies.

Responsibilities:

  • Gathered requirements from PalmTree's client companies
  • Analyzed and designed source/target systems
  • Created the physical data model in the Azure SQL database
  • Used Azure Data Factory pipelines to build and automate transformation processes
  • Created mapping and design documents
  • Set up alerts for process failures and data validation discrepancies
  • Created stored procedures to build audit and control tables for the automated process (a sketch follows this list)
  • Created unit test cases for data validation between the source and the data warehouse
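A minimal sketch of how an audit/control entry might be written from a load step, assuming a hypothetical stored procedure usp_log_pipeline_run in the Azure SQL database and pyodbc connectivity; the procedure name, parameters, and connection details are illustrative only.

```python
# Minimal sketch, assuming pyodbc and a hypothetical audit stored procedure.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<db>;UID=<user>;PWD=<password>"
)

def log_pipeline_run(pipeline_name: str, status: str, rows_loaded: int) -> None:
    """Record a pipeline run in the audit/control table via a stored procedure."""
    with pyodbc.connect(CONN_STR) as conn:
        conn.execute(
            "EXEC dbo.usp_log_pipeline_run ?, ?, ?",
            pipeline_name, status, rows_loaded,
        )
        conn.commit()

# Example: called after an ADF-triggered load or a validation step
log_pipeline_run("dim_customer_load", "Succeeded", 12450)
```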


Description

Description: The Lead Group is a premier performance-based online marketing company specializing in Data & List Management, Email, Display, and Affiliate Marketing. This project focused on revenue reporting built over their email marketing data generated through the proprietary email delivery platform, Prime Lead.

Responsibilities:

  • Worked on requirement and design documents
  • Responsible for the data load from the source MySQL database to the Amazon S3 bucket
  • Analyzed source data and set up a Redshift warehouse for reporting purposes
  • Applied various compression encodings, distribution styles, and sort keys while setting up the warehouse (see the sketch after this list)
  • Set up ETL jobs using Pentaho to load data into the warehouse
  • Set up users and assigned roles for access to the warehouse
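A minimal sketch of the kind of Redshift table setup and S3 load this involves, run here through psycopg2; the table, bucket, IAM role, and the specific encoding/key choices are hypothetical and shown only to illustrate compression, distribution, and sort configuration.

```python
# Minimal sketch, assuming psycopg2 connectivity to a Redshift cluster.
# All object names and credentials below are placeholders.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS email_revenue (
    campaign_id   BIGINT        ENCODE az64,
    send_date     DATE          ENCODE az64,
    list_id       BIGINT        ENCODE az64,
    revenue       DECIMAL(12,2) ENCODE az64
)
DISTSTYLE KEY
DISTKEY (campaign_id)   -- co-locate rows that join on campaign
SORTKEY (send_date);    -- range-restricted scans by date
"""

COPY_CMD = """
COPY email_revenue
FROM 's3://<bucket>/prime-lead/email_revenue/'
IAM_ROLE 'arn:aws:iam::<account>:role/<redshift-copy-role>'
FORMAT AS CSV IGNOREHEADER 1;
"""

with psycopg2.connect(host="<cluster-endpoint>", port=5439,
                      dbname="<db>", user="<user>", password="<password>") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)       # create the table with encodings and keys
        cur.execute(COPY_CMD)  # bulk-load the S3 extract into the warehouse
```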


Description

Description: There are two different warehouses, FADW and RADW, that are used for global reporting solutions. FADW is used for licensing revenue, which includes sales, royalties, advances, guarantees, forecasts, and licensed material/intellectual property, whereas RADW contains sale and inventory information at retail stores in various places. This system helped collect all the sales across various stores through files and stored the data in the data warehouse.

Responsibilities:

  • Worked on user-escalated issues and prepared analysis documents for them
  • Prepared Change Request documents and provided estimations
  • Created Unix scripts and modified existing mappings by adding command tasks to address user issues
  • Investigated optimization techniques to reduce throughput time


Description

Role: ETL Developer (January 2016 - December 2017)

Description: Solera Holdings is a US-based company that provides risk management, asset protection software, and services to the automotive industry and property insurance marketplace. The purpose of this project was to retrieve information related to vehicle claims, manufacturers, OEM and non-OEM parts, and dealers from the front-end applications for CEE countries and load it into a data lake for analysis.

Responsibilities:

  • Coordinated with onsite, development, and sourcing teams during planning and execution
  • Created multiple scripts to extract large sets of data from the source to the data lake
  • Performed file-level validations using Unix scripts
  • Developed BTEQ scripts to map data from source locations to the warehouse
  • Prepared:
    • Filewatcher and file mover scripts to load the data from the mount location (a sketch follows this list)
    • Shell scripts to automate the load process for CEE countries into different schemas
    • Cron jobs and scheduled loads
  • Collaborated with different teams to debug data issues; created unit test documents and code review documents
  • Created design documents
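A minimal sketch of a filewatcher/file-mover, written here in Python for illustration (the originals were shell scripts scheduled via cron); the mount and landing paths are hypothetical.

```python
# Minimal sketch: move fully written files from a mount location to a landing zone.
# Paths are placeholders; a production version would add logging and error handling.
import shutil
import time
from pathlib import Path

MOUNT_DIR = Path("/mnt/source_drop")       # hypothetical mount location
LANDING_DIR = Path("/data/lake/landing")   # hypothetical data-lake landing zone

def move_ready_files(pattern: str = "*.csv") -> int:
    """Move files that are no longer growing from the mount to the landing zone."""
    moved = 0
    for src in MOUNT_DIR.glob(pattern):
        size_before = src.stat().st_size
        time.sleep(5)                       # crude check that the upload has finished
        if src.stat().st_size == size_before:
            shutil.move(str(src), LANDING_DIR / src.name)
            moved += 1
    return moved

if __name__ == "__main__":
    LANDING_DIR.mkdir(parents=True, exist_ok=True)
    print(f"Moved {move_ready_files()} file(s)")
```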


Description

Role: Hadoop Developer (August 2015 - December 2015)

Description: The aim of this project was to perform sentiment analysis on DCP data from social network sites such as Facebook and Twitter, and to provide reporting solutions for the analysis performed on different Disney characters and clients.

Responsibilities:

  • Imported data from social sites into HDFS using Flume
  • Developed Hive queries to load and process data in the Hadoop File System; used machine learning algorithms for classification
  • Prepared MapReduce programs for cleansing the raw data (a sketch follows this list)
  • Conducted sentiment analysis on reviews of the products on the client's website
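A minimal sketch of a cleansing step as a Hadoop Streaming mapper in Python; the tab-separated (post_id, text) input layout is an assumption for illustration, not the actual DCP feed format.

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming mapper sketch: cleanse raw social posts for downstream
# Hive/ML steps. Input layout (post_id TAB text) is hypothetical.
import re
import sys

URL_RE = re.compile(r"https?://\S+")
NON_ALNUM_RE = re.compile(r"[^a-z0-9\s#@]")

for line in sys.stdin:
    parts = line.rstrip("\n").split("\t")
    if len(parts) < 2:
        continue                          # drop malformed records
    post_id, text = parts[0], parts[1].lower()
    text = URL_RE.sub(" ", text)          # strip URLs
    text = NON_ALNUM_RE.sub(" ", text)    # strip punctuation and symbols
    text = re.sub(r"\s+", " ", text).strip()
    if text:
        print(f"{post_id}\t{text}")       # emit cleansed record
```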


Description

This project was designed to replace three disparate systems and deliver a global reporting solution for licensing revenue, which includes sales, royalties, advances, guarantees, forecasts, and licensed material/intellectual property. The system was used globally, with a user base in excess of 350 users, and was designed and developed for high availability and business continuity. The scope of this change request resolved an attribute issue caused by data being published from different systems.


Description

This data warehouse covered DCP (Disney Consumer Products) sale and inventory information at retail stores in various places. The system helped collect all the sales across various stores through files and stored the data in the data warehouse. The warehouse contained retailer, product, location, and date dimensions. The fact tables were populated using the information acquired from the files; Informatica ETL was used to populate the data, applying the business rules generated for rejecting files on each load. The goal was to implement change requests related to DCP clients.
