Sathya S.


Big Data Technical Architect

Bangalore, India

Experience: 12 Years


40036.8 USD / Year

  • Immediate: Available



About Me

Around 13 years of experience in IT, including 6.5 years developing Hadoop applications using Big Data, Hadoop and its ecosystem (Hive, Impala, Sqoop, Apache Kudu), and Spark. Played a key role on the solutions panel for determining the best set of tools needed...


Portfolio Projects

Description

PRA Health Sciences is a global healthcare intelligence partner, consistently ranked among the top CROs, which helps develop life-saving and life-improving drugs with comprehensive clinical development services, including data management, statistical analysis, clinical trial management, medical writing, and regulatory and drug development consulting. Built an integrated data platform on Hadoop, integrating multiple data sources for a major infomediary in the pharma and healthcare domain managing 10 petabytes of active data assets. Defined the data migration and integration strategy from existing data repositories, and designed the data model on Hadoop and cloud technologies. The Patient Data Mart (PDM) is one of the major projects for PRA: we receive complete patient details from different vendors and store them in AWS S3 for access by end users. We use the PDM data to generate patient transactional data, run analytics with AWS Athena, and send the resulting reports to clients.


Description

Symphony Health Solutions is a leading provider of high-value data, analytics, technology solutions, and actionable insights for healthcare and life sciences manufacturers, payers, and providers. The company helps clients drive revenue growth and commercial effectiveness, while adapting to the transformation of the healthcare ecosystem, by integrating a broad set of patient, prescriber, payer, and clinical data together with primary and secondary health research, analytics, and consulting. Symphony delivers a comprehensive perspective on the real dynamics that drive business in the healthcare and life sciences markets. Built an integrated data platform on Hadoop, integrating multiple data sources for a major infomediary in the pharma and healthcare domain managing 10 petabytes of active data assets. Defined the data migration and integration strategy from existing data repositories, designed the data model on Hadoop, and planned the migration of existing PL/SQL and ETL processes/transformations to Hadoop. The patient process is one of the major modules for SHS: we receive complete patient details from different vendors and convert them into tokens using the Synoma engine, which produces a universal patient identifier called the SynomaID. We then run a match pass against the existing history data to determine whether the patient already exists; if not, we generate a new person GID for the patient. The data is then sent to Oracle, where the data analysis team analyzes it. This module was originally in Oracle and Informatica, and we migrated it to Hadoop using PySpark, Hive, Impala, and Sqoop.
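The match-pass step can be pictured with a minimal plain-Python sketch (the real pipeline ran on PySpark/Impala; the names `synoma_id`, `history`, and the `GID-` format are illustrative assumptions, not the actual project identifiers):

```python
import itertools

# Shared counter for minting new person GIDs (illustrative format).
_gid_counter = itertools.count(1)

def match_pass(incoming, history):
    """Assign a person GID to each incoming patient record.

    incoming: list of dicts carrying a tokenized 'synoma_id' key
    history:  dict mapping known SynomaIDs to existing person GIDs
    Existing patients get their historical GID; unseen SynomaIDs
    get a newly minted GID and are added to the history.
    """
    matched = []
    for rec in incoming:
        sid = rec["synoma_id"]
        if sid in history:                        # existing patient
            rec["gid"] = history[sid]
        else:                                     # new patient: mint a GID
            rec["gid"] = f"GID-{next(_gid_counter):06d}"
            history[sid] = rec["gid"]             # later passes will match it
        matched.append(rec)
    return matched
```

In the actual migration, the history lookup was a join against Kudu-backed tables rather than an in-memory dict, but the control flow is the same.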


Description

  • Involved in the Hadoop framework from design through implementation.
  • Understood the business functionality and analyzed business requirements.
  • Conducted root cause analysis on system and database issues throughout the project life cycle.
  • Implemented the ad hoc patient process in PySpark using RDDs and DataFrames.
  • Wrote HQL queries from existing PL/SQL procedures for batch processing.
  • Migrated the patient matching batch process from PL/SQL to Hadoop using Impala and Apache Kudu.
  • Wrote Python programs to read files from HDFS, perform aggregations, and write the results back to another HDFS path.
  • Developed the Hive and Impala UDFs required for the process.
  • Migrated existing Oracle historical data to Hadoop using Sqoop and Informatica workflows.
  • Performed performance tuning of Hadoop processes across tools such as Hive and Impala, and for storing archival data.
  • Evaluated different compression formats and chose the best approach for storing archival as well as batch-processing data.
  • Migrated Oracle data to AWS S3 using Spark.
  • Ran analytical queries against AWS Athena from the CLI, integrated into shell scripts.
  • Worked on POCs to verify connectivity from Hadoop to ETL tools (Pentaho and Informatica BDE).
  • Provided best practices to improve cluster performance.
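The Python aggregation jobs mentioned above can be sketched locally; here plain directories stand in for the HDFS source and target paths, and the `key`/`amount` column names are assumptions for illustration (the real jobs read from and wrote back to HDFS):

```python
import csv
import glob
import os
from collections import defaultdict

def aggregate(src_dir, dst_path):
    """Sum the numeric 'amount' column per 'key' across all CSV files
    in src_dir and write one 'key,total' row per key to dst_path.
    Local paths stand in for the HDFS paths of the real job."""
    totals = defaultdict(float)
    for path in glob.glob(os.path.join(src_dir, "*.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["key"]] += float(row["amount"])
    with open(dst_path, "w", newline="") as f:
        writer = csv.writer(f)
        for key in sorted(totals):                # deterministic output order
            writer.writerow([key, totals[key]])
    return dict(totals)
```

For the cluster version, the same shape was expressed as a PySpark `reduceByKey` or DataFrame `groupBy`/`sum` over the HDFS input path.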


Description

  • Involved in the Hadoop framework from design through implementation.
  • Understood the business functionality and analyzed business requirements.
  • Designed and migrated the mainframe module to Hadoop using Impala, Sqoop, and shell scripts.
  • Conducted root cause analysis on system and database issues throughout the project life cycle.
  • Wrote HQL queries from existing PL/SQL procedures for batch processing.
  • Developed the Hive and Impala UDFs required for the process.
  • Performed performance tuning of Hadoop processes across tools such as Hive and Impala, and for storing archival data.
  • Provided best practices to improve cluster performance.
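The Sqoop side of the migration above can be illustrated with a small helper that assembles a `sqoop import` command for one table; the JDBC URL, table name, and target directory below are placeholder assumptions, not the actual project values:

```python
def sqoop_import_cmd(jdbc_url, user, table, target_dir, mappers=4):
    """Assemble a `sqoop import` command line (as an argument list)
    that pulls one RDBMS table into an HDFS directory. All values
    here are illustrative placeholders."""
    return [
        "sqoop", "import",
        "--connect", jdbc_url,
        "--username", user,
        "--table", table,
        "--target-dir", target_dir,
        "--num-mappers", str(mappers),   # parallel map tasks for the pull
        "--as-parquetfile",              # columnar format, queryable by Impala
    ]
```

A driver shell script would typically loop over the table list, invoke this command per table, then run an Impala `REFRESH` so the new files become visible to queries.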


Description

  • Involved in the Hadoop framework design.
  • Understood the business functionality and analyzed business requirements.
  • Extensively used Pig scripts, complex types, grouping, flattening, etc.
  • Loaded data into Hive tables and wrote queries to process it.
  • Administered the 23-node Hadoop cluster.
  • Loaded datasets into HDFS and wrote MapReduce jobs to mine the data.
  • Loaded data into HBase using Pig and Hive.
  • Responsible for loading, extracting, and validating client data.
  • Installed, configured, and maintained Cloudera Hadoop clusters for application development, along with Hadoop tools such as Hive, Pig, HBase, ZooKeeper, and Flume.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond to any warning or failure conditions.
  • Implemented NameNode backup using NFS for high availability.
  • Used Sqoop to import and export data between HDFS and RDBMS systems.
  • Supported setting up the QA environment and updated configurations for implementing scripts with Pig and Sqoop.
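The daemon health checks mentioned above ran as shell scripts; the core check can be sketched in Python as a parser over `jps` output that reports which expected Hadoop daemons are missing (the daemon list here is an assumption for a small worker node, not the actual monitored set):

```python
# Daemons we expect on this node (illustrative assumption).
EXPECTED = {"NameNode", "DataNode", "ResourceManager", "NodeManager"}

def missing_daemons(jps_output, expected=EXPECTED):
    """Given the text output of `jps` (one 'pid Name' line per JVM),
    return the set of expected Hadoop daemons that are not running."""
    running = set()
    for line in jps_output.strip().splitlines():
        parts = line.split()
        if len(parts) >= 2:              # skip blank or malformed lines
            running.add(parts[1])
    return expected - running
```

A cron-driven wrapper would capture `jps` output, call this check, and alert (or restart the service) when the returned set is non-empty.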


Description

  • Involved in the development and implementation of configuration management standards, procedures, and guidelines.
  • Worked end to end on nearly six projects in Oracle BRM, including upgrade and Agile projects.
  • Performed technical, operational, and functional application maintenance, which involved writing PL/SQL scripts to resolve issues and extract data.
  • Provided project estimates for environment support, kept all environments up to date, and assigned them to the respective projects as required.
  • Automated processes using shell scripts and Perl.
  • Created ClearCase views and branches and merged code.
  • Created new Oracle BRM environments on UNIX servers.
  • Resolved incidents, fulfilled service requests, and honored all SLAs.
  • Deployed code changes from ClearCase to the UNIX environments and applied database changes.
  • Monitored and reported errors and took proactive steps to avoid them.
  • Participated in and responded to quarterly project health reviews and submitted project metrics.


Description

GMOT is an online trading system at Bank of America consisting of different applications categorized as Gloss and non-Gloss systems. The Gloss systems are BTM, CTM, EBAR, GSF, CAPS, and CBAR; the non-Gloss systems are TESS and Oasys, along with reporting systems such as Recon and URSA. EBAR is a Gloss system configured specifically for stock and cash accounting using double-entry bookkeeping principles. It is BAML's books-and-records system for cash equity, debt, and equity derivatives, and the primary source of controlled information for communicating with the outside world and for the governance and control of the business. This includes business management, corporate reporting, financial accounting, statements, and stock and nostro reconciliation. It makes trade-dated and value-dated postings. BTM and CTM are the transaction managers for bonds and clearance. They receive debt trades from the front office, enrich these trades (most importantly with depot information), and send them to GSF for settlement, to EBAR to create the ledgers, and to CSW to allow the business to manage fails. GSF is the interface between the middle-office systems and the outside world. It takes trade information from TESS, CTM, and BTM, and uses it to send SWIFT messages to the agent banks. The URSA reporting system is a read-only web application.


Description

VOMS is a project in the GM account dealing with the Vehicle Operation Management System. The VOMS project mainly consists of the following applications: OM (Order Management), OG (Order Generation), SPECS (Specification of Vehicle), and SPA (Supplier Planner and Allocation). The Order Management application is primarily used by dealers to help them work with orders and manage their inventory. Order Management performs the following business functions: Manage Orders, Disapprove Orders, Tag Orders from Bulletin Board, Flag Pattern for Trade, Mass Order Update, Change Orders, Send Approved Orders to POMS, and Manage Inventory. This application mainly enforces restrictions on the start and end dates for changing orders. The Order Generation application deals with how orders are generated from dealers, including event history, shipment, and so on. The SPECS application deals with the specification of the vehicle: the color of the car, the type of engine, whether A/C is required, and so on. SPECS interfaces with all the other applications because it holds the vehicle specifications. The SPA (Supplier Planner Allocation) application deals with planning the supply of vehicles from GM to its customers, the dealers: how many cars are available at the plant for the month and how many the dealers have ordered.
