Rajananda Prabhu A.

Hadoop Developer / Data Engineer

Bangalore, India

Experience: 14 Years

57600 USD / Year

  • Availability: Immediate


About Me

14 years of IT experience as a Hadoop Developer, Data Engineer, SQL developer, and SQL/NoSQL data modeler, covering application analysis, development, and support across the complete software life cycle in client-server applications. Experience in Big Data technologies...


Portfolio Projects



Description

GMAS implements an IT solution and business processes to support effective management of maritime risk, taking account of both asset (vessel/terminal) eligibility and compatibility, in compliance with the Transport Manual – Maritime Safety requirements. The system also acts as a central repository of asset data and of the current positions of vessels, and this data feeds outbound interfaces to other systems. As data in the system grows year after year, a disposal/purge plan had to be implemented at record level based on each country's rules and regulations. For this purpose we partitioned the GMAS tables (80+) using range and reference partitioning, so that aged data sits in its own partitions and a partition can be dropped when required, as in the sketch below.
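An illustrative sketch of the purge step, assuming an Oracle back end; the table name GMAS_POSITIONS, partition P_2014, and connection details are hypothetical placeholders, not the actual GMAS schema:

```scala
import java.sql.DriverManager

// Sketch only: dropping an aged range partition purges every record in that
// date range in one DDL operation instead of a slow row-by-row DELETE.
object PartitionPurge {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:oracle:thin:@//dbhost:1521/GMAS", "gmas_user", "secret") // placeholders
    try {
      val stmt = conn.createStatement()
      // UPDATE INDEXES keeps global indexes usable after the partition is gone.
      stmt.execute(
        "ALTER TABLE GMAS_POSITIONS DROP PARTITION P_2014 UPDATE INDEXES")
      stmt.close()
    } finally {
      conn.close()
    }
  }
}
```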


Description

T-Mobile is the 3rd largest wireless network operator in the United States. They provide mobile service, Netflix, data plans, banking cards, T-Vision Home, etc. in the US.

The financial transactions, marketing data, and customer port-in/port-out data are stored in EDS, the Enterprise Data Solution system. EDS receives data in the form of files, pulls data from database sources using Sqoop, and streams data from GoldenGate and TIBCO JMS into the RAW folder of the EDS Hadoop data lake. Spark jobs process the data and load it into Hive, a copy of the raw data is stored in an HBase table, and a summary of the data is loaded into Teradata.
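A minimal sketch of this flow; the paths, table names, and JDBC details are hypothetical, and the raw copy to HBase is omitted (it would need the HBase-Spark connector):

```scala
import org.apache.spark.sql.SparkSession

// Sketch of the EDS flow: read raw files, load Hive, push a summary to Teradata.
object EdsLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EdsLoad")
      .enableHiveSupport()
      .getOrCreate()

    // 1. Read the day's files from the data lake's RAW folder (hypothetical path).
    val raw = spark.read.option("header", "true").csv("/data/eds/raw/2020-01-01/")

    // 2. Processed data is appended to a Hive table.
    raw.write.mode("append").saveAsTable("eds.transactions")

    // 3. A daily summary goes to Teradata over JDBC (placeholder credentials).
    raw.groupBy("account_id").count()
      .write.mode("overwrite")
      .format("jdbc")
      .option("url", "jdbc:teradata://tdhost/DATABASE=eds")
      .option("dbtable", "eds_summary")
      .option("user", "eds_user")
      .option("password", "***")
      .save()

    spark.stop()
  }
}
```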

Environment: Hortonworks Hadoop, RHEL, Spark, Spark SQL, Hive, HiveQL, Teradata, HDFS, SQOOP.

Responsibilities:

  • Working with the onsite team and coordinating with BAs to develop the business requirement and data flow documents.
  • Designing the workflows for jobs.
  • Loading data from different sources into DataFrames and loading it into Hive tables.
  • Developing scripts using Spark SQL for data aggregation, loading the processed data into Teradata tables, and generating reports in the landing area of the edge node (see the sketch after this list).
  • Designing Hive tables using the right partitioning and storage strategies.
  • Troubleshooting issues with jobs and coordinating with the dev team to fix them.
  • Preparing Hive scripts to process data on HDFS & Hive and extract the summarized information from Hive.
  • Processing and analyzing the data from Hive tables using Spark SQL and HiveQL.
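A minimal sketch of the aggregation-and-report step, as it might look in spark-shell (where `spark` is predefined); the table, columns, date filter, and landing path are hypothetical:

```scala
// Aggregate Hive data with Spark SQL, then drop a single CSV report
// into the edge node's local landing area.
val summary = spark.sql(
  """SELECT market, port_type, COUNT(*) AS cnt
    |FROM eds.port_requests
    |WHERE load_date = '2020-01-01'
    |GROUP BY market, port_type""".stripMargin)

summary.coalesce(1)                       // one output file for the report
  .write.mode("overwrite")
  .option("header", "true")
  .csv("file:///home/eds/landing/port_summary")
```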


Description

Daimler is one of the biggest producers of premium cars and the world's biggest manufacturer of commercial vehicles. They provide financing, leasing, insurance and innovative mobility services.

Daimler’s warranty data is stored in the Advanced Quality Analysis (AQUA) system. Users from many departments consume this data through a variety of tools. Daimler consolidates the quality-related data on Teradata and on the Hortonworks Hadoop platform for analysis.

Environment: Hortonworks Hadoop, Linux, Teradata, HDFS, Hive, SQOOP.

Responsibilities:

  • Working with the onsite team to understand the business requirements.
  • Data modelling Hive tables (external & managed); creating views and indexes; partitioning & bucketing tables; migrating the data from different sources.
  • Helping the team understand the data model and system architecture.
  • Loading the data from Teradata to Hive using SQOOP for the client's data science team.
  • Processing and analyzing the data from Hive tables using HiveQL.
  • Performance tuning of Hive and Spark queries.
  • Troubleshooting issues with jobs and coordinating with the dev team to fix them.
  • Preparing/updating the System Maintenance & Technical Document (SMTD) and other technical documents.
  • Loading data files to HDFS & Hive; creating Hive managed & external tables with indexing, partitioning, and bucketing; Hive performance tuning (see the sketch after this list).
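A minimal sketch of the external-table design, with a hypothetical warranty table; the HiveQL is shown through `spark.sql`, but the same DDL runs in beeline or the Hive CLI. Partitioning by year lets queries and loads touch only the relevant HDFS directories:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: database, table, columns, and location are hypothetical.
object WarrantyDdl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WarrantyDdl")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("CREATE DATABASE IF NOT EXISTS aqua")

    // External table: dropping it removes only metadata, never the HDFS data.
    spark.sql("""
      CREATE EXTERNAL TABLE IF NOT EXISTS aqua.warranty_claims (
        claim_id STRING,
        vin      STRING,
        amount   DECIMAL(12,2)
      )
      PARTITIONED BY (claim_year INT)
      STORED AS ORC
      LOCATION '/data/aqua/warranty_claims'
    """)

    spark.stop()
  }
}
```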


Description

C3BL is the warehousing system for Cisco's Customer Care Center OLTP system and handles the processing of service orders, service requests, and tasks. The primary source systems of C3BL are Cisco Customer Care (C3) and the EDW (Enterprise Data Warehouse). Data first arrives in the staging area from these two systems; a specific set of business rules is then applied in the staging area, and the results are loaded into the reporting DB. Because the volume of data grew day by day, the client decided to migrate the ETL activities from Informatica to the Spark framework.
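A minimal sketch of the staging-to-reporting flow; the databases, tables, columns, and the example business rule are hypothetical placeholders, not the actual C3BL rules:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch of the staging -> business rules -> reporting flow described above.
object C3blEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("C3blEtl")
      .enableHiveSupport()
      .getOrCreate()

    // Staging tables fed by the two source systems (hypothetical names).
    val c3  = spark.table("staging.c3_service_requests")
    val edw = spark.table("staging.edw_service_orders")

    // Example business rule: keep only closed requests and tag their source.
    val reporting = c3.join(edw, Seq("order_id"))
      .filter(col("status") === "CLOSED")
      .withColumn("source_system", lit("C3/EDW"))

    reporting.write.mode("overwrite").saveAsTable("reporting.service_requests")
    spark.stop()
  }
}
```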

Environment: Hadoop, Spark, Spark SQL, Scala, Hive 1.1, and Oracle 11c.

Responsibilities:

  • Designing the workflows for jobs.
  • Working with the onsite team and coordinating with BAs to develop the BRD and data flow documents.
  • Loading data from different sources into DataFrames and loading it into Hive tables.
  • Developing data transformation scripts using Sqoop to apply business rules and load the processed data into the destination tables.
  • Data modelling Hive tables using the right strategies.
  • Troubleshooting issues with jobs and coordinating with the dev team to fix them.
  • Creating Hive scripts to process data on HDFS and HBase.
  • Processing and analyzing the data from Hive tables using HiveQL.
  • Peer reviews and monitoring task updates in the project.


Description

As part of the CoE we implemented several client applications on MongoDB/NoSQL databases. Two of the major ones: the Operational Data Store (ODS) is a system designed to integrate data from multiple sources for additional operations. When the golden source is down for scheduled maintenance, customer care receives many calls about the non-availability of account balances, transaction counts, transaction details, etc., so while the golden source is unavailable the web and mobile applications use this system to serve that information. Make Business Special (MBS) is a system that enables a business to create invoices for customers and record bills; it provides a complete approach to running the business, along with a performance- and insights-led view. In the ChangeIT application, we migrated the data from Teradata to Hive using SQOOP.
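A minimal sketch of an ODS lookup index using the MongoDB Scala driver; the connection string, database, collection, and field names are hypothetical:

```scala
import org.mongodb.scala._
import org.mongodb.scala.model.Indexes
import scala.concurrent.Await
import scala.concurrent.duration._

// Sketch only: web/mobile fallback reads are keyed by account id,
// so the ODS collection is indexed on that field.
object OdsIndexes {
  def main(args: Array[String]): Unit = {
    val client   = MongoClient("mongodb://odshost:27017") // placeholder host
    val accounts = client.getDatabase("ods").getCollection("account_balances")

    // createIndex returns an observable; block here just for the sketch.
    val created = accounts.createIndex(Indexes.ascending("accountId"))
    println(Await.result(created.toFuture(), 30.seconds))

    client.close()
  }
}
```

For a sharded deployment, choosing the same high-cardinality lookup key as the shard key would spread those reads across shards.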

Environment: MongoDB, Linux, Oracle, Hadoop, HDFS, Hive, SQOOP.

Responsibilities:

  • Responsible for database architecture, design, and ER modelling for all in-house projects.
  • High-level documentation of design decisions, ensuring adherence to existing process guidelines.
  • Participating in implementation plans for new or significant enhancements to the database design.
  • Implementing database best practices such as proactive problem identification.
  • Working with technical teams and users to understand business requirements and identify data solutions.
  • Analyzing the requirements & preparing technical design docs.
  • Migrating data from RDBMS to MongoDB.
  • Migrating data from Teradata to Hive tables using SQOOP.
  • Designing the data model and implementing sharding and indexing strategies for huge data sets.
  • Optimizing MongoDB CRUD operations.
  • Migrating data from/to MongoDB using mongoimport and mongoexport.
  • Conducting internal and external interviews.


Description

Project: JAWS/REEF - Inventory Management System for SBL

The purpose of this project is to maintain collateral/outright trade positions and availability positions, and to maintain static reference data. Reef is a system that holds static reference data such as equity/fixed-income securities (identifiers like ISIN, SEDOL, CUSIP, etc.), counterparty details, security prices, benchmark rates, index constituents, and average traded volumes of securities. These static data are sourced through feeds from internal systems. (Worked onsite in London, UK for 16 months.) Jaws is a consolidated data store for the following:

  • Collateral trades as feeds from external trading systems such as Global One & Loanet; equity stock borrow & lending positions are captured from the feeds and stored.
  • Outright trades from settlement systems such as T24/SSE/ADP – instrument positions from all trading, summarized by cost centers & depots (see the sketch after this list).
  • Internal availabilities as feeds from Zurich pools; external availabilities are received through emails from third-party institutions such as banks.
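An illustrative, plain-Scala sketch of that position roll-up; the case class and sample trades are hypothetical, not Jaws's actual model:

```scala
// Net instrument positions per (instrument, cost center, depot).
object PositionRollup {
  case class Trade(isin: String, costCenter: String, depot: String, qty: Long)

  def main(args: Array[String]): Unit = {
    val trades = Seq(
      Trade("GB0002374006", "CC1", "DEP-LN", 500),
      Trade("GB0002374006", "CC1", "DEP-LN", -200),
      Trade("US0378331005", "CC2", "DEP-NY", 300)
    )

    val positions = trades
      .groupBy(t => (t.isin, t.costCenter, t.depot))
      .map { case (key, ts) => key -> ts.map(_.qty).sum }

    positions.foreach { case ((isin, cc, depot), qty) =>
      println(s"$isin / $cc / $depot -> $qty")
    }
  }
}
```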

Environment: UNIX, Sybase ASE, Sybase Power designer, Perl, Shell scripting, Clearcase, SVN, Autosys Job scheduler.

Responsibilities:

  • Discussing/gathering requirements from users/client.
  • Analyzing the requirements and preparing technical docs.
  • Impact analysis.
  • Designing the data model; coding stored procedures; creating triggers, views, and indexes.
  • Assigning work to team members and monitoring delivery timelines.
  • Code review and unit testing.
  • Analyzing & resolving production issues.
  • Coordinating with the quality assurance team in preparing/reviewing QA test cases.
  • Resolving UAT issues and coordinating with the business to get sign-off.
  • Preparing release notes and the production support document.
  • Performance tuning of stored procs & Perl loaders.
  • Providing technical and logical solutions.
  • DB performance/space/locks monitoring; creating groups & managing user permissions.
  • Client billing and leave tracking.
  • Conducting internal & external interviews.


Description

Project: ASR – Automated Securities Reconciliation

Enhancement, maintenance, and support for various custody-based applications in the Bank's Brussels branch. The ASR (Automated Securities Reconciliation) application was developed to automate asset statement reconciliation between the Bank's internal system and the position data from sub-custodian/third-party custodian systems. It also allows users to investigate asset breaks and forward them to the operational departments, and it provides management with operational reports on the state of the reconciliation and the ageing of breaks. The project also migrated data from OLTP to OLAP using Informatica and generated MIS reports from the data warehouse.
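An illustrative, plain-Scala sketch of the core reconciliation idea; the security identifiers and quantities are made-up sample data, not ASR's actual model:

```scala
// Compare internal positions against custodian positions and report breaks.
object AsrRecon {
  def main(args: Array[String]): Unit = {
    val internal  = Map("GB0002374006" -> 1000L, "US0378331005" -> 250L)
    val custodian = Map("GB0002374006" -> 1000L, "US0378331005" -> 200L,
                        "FR0000120271" -> 75L)

    // A break is any security where the two sides disagree (missing = 0).
    val breaks = (internal.keySet ++ custodian.keySet).flatMap { isin =>
      val in  = internal.getOrElse(isin, 0L)
      val cus = custodian.getOrElse(isin, 0L)
      if (in != cus) Some((isin, in, cus)) else None
    }

    breaks.foreach { case (isin, in, cus) =>
      println(s"BREAK $isin: internal=$in custodian=$cus diff=${in - cus}")
    }
  }
}
```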

Environment: Linux, Sybase ASE 15.0.3, Sybase IQ 15.4, Shell scripting, SAP Sybase PowerDesigner.

Responsibilities:

  • Discussing/gathering requirements from users/client.
  • Analyzing the requirements, preparing detailed technical design docs & impact analysis.
  • Designing the data model; coding stored procedures; creating triggers, views, and indexes.
  • Assigning work to team members and monitoring delivery timelines.
  • Loading the data from Sybase ASE (OLTP) to Sybase IQ (OLAP) using Informatica ETL.
  • Query writing, debugging, code review, and unit testing.
  • Coordinating with the quality assurance team in preparing/reviewing QA test cases.
  • Creating and reviewing DDL & DML for production deployment.
  • Resolving UAT issues and coordinating with the business to get sign-off.
  • Preparing release notes and the production support document.
  • Performance tuning of stored procs; providing technical and logical solutions.
  • Preparing client billing, accruals, the consolidated task list, & leave tracking.
  • Conducting internal & external interviews.
