About Me
Over 11 years of professional IT experience in the Big Data ecosystem and relational database technologies. Good experience with the lifecycle of data processing steps: Connect, Correct, Collect, Compose, Consume, and Control. Strong hands-on ex...
Skills
Portfolio Projects
Description
MODE (Mortgage Operational Data Environment) is a real-time Operational Data Store fed by various source systems to support day-to-day critical business processes and reporting.
MIDE (Mortgage Integrated Data Environment) stores historical data over a period of years for business intelligence, strategic decisions, analytics, and reporting in the home mortgage business.
Contributions:
- Involved in design meetings, estimation, and detailed FSD/DTD preparation.
- Loaded data from the UNIX file system into HDFS.
- Moved data from upstream databases and file systems into Hadoop using Sqoop.
- Converted code from a legacy mainframe system to Spark.
- Involved in API implementation using Spark as part of data transformation.
- Implemented business logic using Spark SQL scripts.
- Managed and scheduled jobs on the Hadoop cluster.
- Loaded data into Spark through the DataSource API and performed Dataset operations per business requirements.
- Worked with NoSQL data models and queries for data movement.
- Used Apache Kafka for data ingestion.
- Handled TEXT, ORC, Avro, and Parquet data using Hive and Spark SQL, filtering the data based on query factors.
- Analyzed data sets to determine the optimal way to aggregate and report on them.
- QA environment setup and support.
- Involved in meetings with US team partners, preparing MOMs and status reports.
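The Spark SQL data-handling steps above can be sketched roughly as follows; the table, column, and path names are hypothetical, not the project's actual objects:

```sql
-- Register Parquet data loaded into HDFS as an external table (illustrative names)
CREATE EXTERNAL TABLE IF NOT EXISTS mode_loans (
  loan_id    STRING,
  status     STRING,
  balance    DECIMAL(12,2),
  updated_ts TIMESTAMP
)
STORED AS PARQUET
LOCATION '/data/mode/loans';

-- Filter on a query factor and aggregate per a business rule
SELECT status,
       COUNT(*)     AS loan_cnt,
       SUM(balance) AS total_balance
FROM   mode_loans
WHERE  updated_ts >= '2020-01-01'
GROUP  BY status;
```

The same pattern applies to ORC and Avro sources by changing the `STORED AS` clause.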
Description
CMIE is the single source for all Capital Markets applications and receives data from new upstreams. As part of the batch process, CMIE consumes data from MODE and sends data to other downstream partners.
Contributions:
- Wrote complex Hive and SQL queries for data analysis to meet business requirements.
- Created Hive tables, loaded data, and wrote Hive UDFs.
- Involved in defect fixing, code review, UNIT testing and supporting release activities.
- Applied performance optimizations such as partitioning and bucketing on Hive tables.
- Created scalable, high-performance Sqoop scripts for data movement.
- Imported and exported data into HDFS and Hive using Sqoop.
- Worked with the Parquet, ORC, and Avro serialization formats to handle all data formats.
- Migrated traditional database code to distributed system code (mainly HiveQL)
- Involved in application performance tuning and troubleshooting
- Experienced in managing and reviewing Hadoop log files.
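The partitioning and bucketing optimizations mentioned above might look like the following HiveQL; the table and column names are illustrative only:

```sql
-- Partitioned and bucketed Hive table (hypothetical names)
CREATE TABLE trades_part (
  trade_id STRING,
  amount   DECIMAL(14,2)
)
PARTITIONED BY (trade_date STRING)
CLUSTERED BY (trade_id) INTO 32 BUCKETS
STORED AS ORC;

-- Load from a raw staging table using dynamic partitioning
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE trades_part PARTITION (trade_date)
SELECT trade_id, amount, trade_date
FROM   trades_raw;
```

Partition pruning on `trade_date` then lets queries skip irrelevant data, while bucketing on `trade_id` speeds up joins and sampling.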
Description
FLEXCUBE is a core banking product used by multiple banks. I worked with 1) Mashreq Bank, Dubai, 2) RAK Bank, Dubai, and 3) Warba Bank, Kuwait, as part of implementation and support activities.
Contributions:
- Involved in all modules interface Development and Implementation with PL/SQL.
- Actively participated in the system study and was involved in business analysis and development.
- Interacted with customers globally and addressed their issues.
- Supported disaster recovery drills twice a year.
- Performance tuning of application programs and optimization of SQL queries for faster response.
- Created or modified packages, procedures, functions, triggers, tables, indexes, and views.
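A minimal sketch of the kind of PL/SQL packaging described above; the package, table, and column names are hypothetical, not actual FLEXCUBE objects:

```sql
-- Hypothetical package wrapping an account-balance lookup behind one interface
CREATE OR REPLACE PACKAGE acct_util AS
  FUNCTION get_balance(p_acct_no IN VARCHAR2) RETURN NUMBER;
END acct_util;
/

CREATE OR REPLACE PACKAGE BODY acct_util AS
  FUNCTION get_balance(p_acct_no IN VARCHAR2) RETURN NUMBER IS
    v_bal NUMBER;
  BEGIN
    SELECT balance
      INTO v_bal
      FROM accounts
     WHERE acct_no = p_acct_no;  -- an index on acct_no keeps this lookup fast
    RETURN v_bal;
  END get_balance;
END acct_util;
/
```

Grouping related procedures and functions into packages like this keeps interfaces stable while the bodies can be tuned independently.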
Description
Supply Chain Global Data Analytics is a Big Data platform where we work with data teams and the business to analyze store data from different sources, ingesting the data, applying ETL transformations, and generating reports for insights. Worked with omni-channel features to focus on and analyze data from different channels.