- 9+ years of extensive experience in the architecture, design, and development of J2EE and Big Data (Hadoop) applications.
- OCJP, SCWCD, and NCFM Module 1 certified.
- Worked at Saba from Dec 2017 to May 2019. ...
Data & Analytics
Saba is a learning product used by various customers.
As part of the learning team, I am rewriting some existing functionality so that it can support concurrency. Saba primarily used SMF, a proprietary framework for processing large data sets. This product has many drawbacks, chiefly frequent DB locks and concurrency issues. To overcome them, I am building a framework that can process large data sets without concurrency problems.
Created a framework that processes large data sets:
- A Java publisher puts messages on a Hazelcast queue.
- A shell script triggers a configurable number of JVMs; each listens to the Hazelcast queue and consumes messages.
- After consuming and processing a message, each worker emits SQL CRUD operations, which are published to a second Hazelcast queue.
- A DB-processor JVM consumes these operations and updates the database using JDBC batching.
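The publish/consume/batch flow above can be sketched locally: Hazelcast's distributed IQueue implements java.util.concurrent.BlockingQueue, so a LinkedBlockingQueue stands in for it here. The class, queue, and table names are illustrative, not from the actual Saba codebase.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PipelineSketch {
    // In production these would come from hazelcastInstance.getQueue(...);
    // Hazelcast's IQueue implements BlockingQueue, so the calls are identical.
    static final BlockingQueue<String> workQueue = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> sqlQueue  = new LinkedBlockingQueue<>();

    static int run() throws Exception {
        // Publisher: put raw messages on the work queue.
        for (int i = 0; i < 3; i++) workQueue.put("record-" + i);

        // Worker JVMs (threads here): consume a message, process it, and
        // publish the resulting SQL CRUD operation to the SQL queue.
        ExecutorService workers = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 3; i++) {
            workers.submit(() -> {
                String msg = workQueue.take();
                sqlQueue.put("UPDATE records SET processed=1 WHERE id='" + msg + "'");
                return null;
            });
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);

        // DB-processor JVM: drain the SQL operations and execute them as one
        // JDBC batch (stmt.addBatch(sql) per operation, then executeBatch()).
        List<String> batch = new ArrayList<>();
        sqlQueue.drainTo(batch);
        return batch.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("batched " + run() + " statements");
    }
}
```

Swapping the LinkedBlockingQueues for hazelcastInstance.getQueue(...) turns this single-process sketch into the multi-JVM version, since the BlockingQueue calls do not change.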
Tools: Visual Studio Online
Matrix is an in-house Deutsche Bank (DB) framework used to calculate exposure for DB's listed derivatives trades. Exposure is calculated using different backtesting approaches:
- Forward-looking backtesting
- Statistical historical backtesting
- Hypothetical backtesting
As an individual contributor, I help the business perform various backtesting runs and work on business-driven changes to backtesting. Created a framework that performs valuation for 90k+ trades:
- A Java publisher puts trades on a Hazelcast queue.
- n worker JVMs are spawned via the Java API; each listens to the Hazelcast queue and pulls trade data from it.
- Each spawned JVM prices the trades using the DBANA API.
- DBANA results are stored in the DB, and the final result is shared with the business.
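Spawning n worker JVMs from Java can be sketched with the standard ProcessBuilder API. The worker class name, jar, and flag below are placeholders, not the real Matrix artifacts; the sketch only builds and prints the launch commands, with the actual pb.start() call left commented.

```java
import java.util.ArrayList;
import java.util.List;

public class JvmSpawner {
    // Builds the launch command for one worker JVM. "TradeWorker" and
    // "matrix.jar" are hypothetical names used for illustration.
    static List<String> workerCommand(int id) {
        List<String> cmd = new ArrayList<>();
        cmd.add(System.getProperty("java.home") + "/bin/java");
        cmd.add("-cp");
        cmd.add("matrix.jar");
        cmd.add("TradeWorker");          // worker listens to the Hazelcast queue
        cmd.add("--worker-id=" + id);
        return cmd;
    }

    public static void main(String[] args) throws Exception {
        int n = 4;  // number of pricing JVMs to spawn
        for (int i = 0; i < n; i++) {
            ProcessBuilder pb = new ProcessBuilder(workerCommand(i)).inheritIO();
            // pb.start() would actually launch the JVM; omitted in this sketch.
            System.out.println(String.join(" ", pb.command()));
        }
    }
}
```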
Counterparty trade evaluation is done by the Portcalc (PC) team. PC sends this data, and I wrote a framework that consumes it and stores it in HBase. The data is stored in Google protobuf format in a single column family; a single row is close to 10 MB, and the volume is around 40+ million rows.
Once the data is stored, a MapReduce job I wrote creates a second column family under the same row key and stores the metadata in the same table. After this job completes, another Java process fetches only the message metadata and writes it to a CSV file. The CSV is then uploaded to Oracle, and the final table is consumed by the Matrix application.
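A minimal sketch of the metadata-to-CSV step, assuming the metadata has already been fetched (the HBase scan itself needs a running cluster, so it is left out); the column names and sample values are illustrative, not the real schema.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MetadataCsvExporter {
    // Turns row-key -> metadata pairs into CSV text ready for upload to
    // Oracle (e.g. via SQL*Loader). In the real job, the map would be
    // populated by an HBase scan restricted to the metadata column family.
    static String toCsv(Map<String, String> metadataByRowKey) {
        StringBuilder sb = new StringBuilder("row_key,metadata\n");
        for (Map.Entry<String, String> e : metadataByRowKey.entrySet()) {
            sb.append(e.getKey()).append(',').append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> meta = new LinkedHashMap<>();
        meta.put("trade-001", "counterparty=ACME;ts=2019-01-01");
        System.out.print(toCsv(meta));
    }
}
```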
CMS Repository -- Zensar
Set up Hadoop and HBase for storing CMS data. CMS data is stored in HBase, and the result is consumed by the CMS system (Alfresco). I modified the Alfresco API, which previously stored data on NFS; after this change, data is both stored in and retrieved from HBase.
Wrote the following schedulers for HBase maintenance:
- Restart the HBase master and region servers every 3 weeks
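The NFS-to-HBase switch follows a store-swap pattern: content is read and written through a small store abstraction, so the backend can change without touching callers. A minimal sketch under that assumption, with simplified interface names (not Alfresco's actual content-store API) and an in-memory Map standing in for the HBase table's put/get calls.

```java
import java.util.HashMap;
import java.util.Map;

public class ContentStoreSwap {
    // Simplified stand-in for the content-store abstraction the CMS writes
    // through; swapping NFS for HBase means supplying a new implementation.
    interface ContentStore {
        void write(String contentUrl, byte[] data);
        byte[] read(String contentUrl);
    }

    // HBase-backed store; the Map stands in for table.put / table.get on the
    // CMS table (real code would use the HBase client Connection/Table API).
    static class HBaseContentStore implements ContentStore {
        private final Map<String, byte[]> table = new HashMap<>();
        public void write(String contentUrl, byte[] data) { table.put(contentUrl, data); }
        public byte[] read(String contentUrl) { return table.get(contentUrl); }
    }

    public static void main(String[] args) {
        ContentStore store = new HBaseContentStore();
        store.write("store://2019/doc1.bin", "hello".getBytes());
        System.out.println(new String(store.read("store://2019/doc1.bin")));
    }
}
```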