About Me
- Over 8 years of overall experience in the IT industry.
- 2 years of experience in Microsoft Azure cloud technologies.
- 6 years of experience in L3 / PL/SQL application support.
- Strong programming knowledge in SQL, PL/SQL, and SQL Server.
- Experience in writing SQL queries using joins, analytical functions, indexes, and constraints.
- Good exposure to Unix shell scripting and Python.
- Experienced in Azure Data Factory and in preparing CI/CD scripts and DevOps pipelines for deployment.
- Very strong experience in ETL design.
- Hands-on experience with Azure services: Azure Data Lake Storage (ADLS), Logic Apps, and Azure Data Factory (ADF).
- Prepared project documentation such as setup documents, test scripts, and functional specification documents.
- Hands-on experience with Azure Data Factory and its core concepts: datasets, pipelines and activities, scheduling, and execution.
- Excellent knowledge of ADF building components: integration runtimes, linked services, datasets, pipelines, and activities.
Skills
Programming Language
Operating System
Others
Web Development
Positions
Portfolio Projects
Company
Verizon Wireless
Role
Data Engineer
Description
- Created pipelines to extract data from on-premises source systems to Azure Data Lake Storage; worked extensively on Copy activities and implemented copy behaviors such as flatten hierarchy, preserve hierarchy, and merge hierarchy. Implemented error handling through the Copy activity.
- Exposure to Azure Data Factory activities such as Lookup, Stored Procedure, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter, and Wait.
- Created dynamic pipelines to extract from multiple sources into multiple targets; used Azure Key Vault extensively to configure connections in linked services.
- Configured Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled pipelines and configured alerts to get notified of pipeline failures.
- Implemented delta-logic extractions for various sources with the help of a control table; implemented data frameworks to handle deadlocks, recovery, and pipeline logging.
- Reviewed individual work on ingesting data into Azure Data Lake and provided feedback based on the reference architecture, naming conventions, guidelines, and best practices.
- Developed Spark (Python) notebooks to transform and partition data and organize files in ADLS.
- Worked on Azure Databricks to run Spark (Python) notebooks through ADF pipelines.
- Used Databricks widgets to pass parameters from ADF to Databricks at run time (see the sketch after this list).
- Involved in end-to-end logging frameworks for Data Factory pipelines.
- Extracted data from different sources, such as flat files and Oracle, to load into a SQL database.
- Involved in the preparation and execution of unit, integration, and end-to-end test cases.
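A minimal sketch of the widget-based parameter handoff mentioned above, assuming a Databricks notebook where spark and dbutils are predefined; the parameter names, mount paths, and column names are illustrative assumptions, not the actual project values:

```python
# Databricks notebook sketch: read parameters passed from an ADF Databricks
# Notebook activity via widgets, then write the data partitioned by load date.
# Paths and names below are hypothetical.
from pyspark.sql.functions import lit

# Widgets are populated by the ADF activity's base parameters at run time.
dbutils.widgets.text("source_system", "")
dbutils.widgets.text("load_date", "")

source_system = dbutils.widgets.get("source_system")
load_date = dbutils.widgets.get("load_date")

raw_path = f"/mnt/datalake/raw/{source_system}/{load_date}"
curated_path = f"/mnt/datalake/curated/{source_system}"

df = spark.read.parquet(raw_path)

# Partitioning by load_date lets downstream reads prune files in ADLS.
(df.withColumn("load_date", lit(load_date))
   .write.mode("overwrite")
   .partitionBy("load_date")
   .parquet(curated_path))
```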
Company
Digital Market Transformation
Role
Data Engineer
Description
Agility Unite is an integration-driven umbrella that covers the entire marketing task flow, from data ingestion to reporting and everything in between. It is a cloud-based marketing platform.
Roles & Responsibilities:
- Created pipelines to extract data from on-premises source systems to Azure Data Lake Storage; worked extensively on Copy activities and implemented copy behaviors such as flatten hierarchy, preserve hierarchy, and merge hierarchy. Implemented error handling through the Copy activity.
- Exposure to Azure Data Factory activities such as Lookup, Stored Procedure, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter, and Wait.
- Created dynamic pipelines to extract from multiple sources into multiple targets; used Azure Key Vault extensively to configure connections in linked services.
- Configured Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled pipelines and configured alerts to get notified of pipeline failures.
- Implemented delta-logic extractions for various sources with the help of a control table (see the control-table sketch after this list); implemented data frameworks to handle deadlocks, recovery, and pipeline logging.
- Reviewed individual work on ingesting data into Azure Data Lake and provided feedback based on the reference architecture, naming conventions, guidelines, and best practices.
- Developed Spark (Python) notebooks to transform and partition data and organize files in ADLS.
- Worked on Azure Databricks to run Spark (Python) notebooks through ADF pipelines.
- Used Databricks widgets to pass parameters from ADF to Databricks at run time.
- Involved in end-to-end logging frameworks for Data Factory pipelines.
- Extracted data from different sources, such as flat files and Oracle, to load into a SQL database.
- Involved in the preparation and execution of unit, integration, and end-to-end test cases.
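A sketch of the control-table-driven delta extraction described above, assuming a SQL Server control table reachable over ODBC; the connection string, table names, and column names are illustrative assumptions:

```python
# Delta (incremental) extraction driven by a control table: read the last
# watermark, pull only newer rows, and advance the watermark after a
# successful load so a failed run can simply be re-run.
import pyodbc

# Hypothetical connection details.
CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=stage;UID=etl;PWD=secret"

conn = pyodbc.connect(CONN_STR)
cur = conn.cursor()

# 1. Last successful watermark for this source.
cur.execute(
    "SELECT last_watermark FROM etl.control_table WHERE source_name = ?",
    ("orders",),
)
last_watermark = cur.fetchone()[0]

# 2. Extract only the delta since that watermark.
cur.execute(
    "SELECT order_id, amount, modified_date FROM dbo.orders WHERE modified_date > ?",
    (last_watermark,),
)
delta_rows = cur.fetchall()

# ... load delta_rows into the target here ...

# 3. Advance the watermark only after the load succeeds.
if delta_rows:
    cur.execute(
        "UPDATE etl.control_table SET last_watermark = ? WHERE source_name = ?",
        (max(r.modified_date for r in delta_rows), "orders"),
    )
    conn.commit()
```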
Company
Clinical Research
Description
Responsibilities
- Wrote scripts as per requirements and configured them as reports in an APEX dashboard for proactive monitoring of production issues.
- Drilled through Java code blocks to understand client response codes.
- Ran shell scripts, granted Hue access, and created study folders in Unix as per business requests.
- Monitored file availability in Unix directories and cleaned data from study folders (see the sketch after this list).
- Performed tool deployments in Unix, analyzed disk usage, and cleaned up unnecessary user data.
- Monitored OEM to understand database performance issues.
- Monitored Informatica workflow progress for any latency in dependent job triggering.
- Identified the causes of workflow failures by checking the exception.log, node.log, and catalina.out files.
- Identified application bugs in releases and raised defects or Jira tickets to the development team.
- Developed scripts and tested changes in lower environments to handle production issues, following up with the development and QA teams to get them approved; created knowledge base documents.
- Raised change requests to move new changes/amendments into the system.
- Worked on ActiveBatch to schedule, enable, and disable jobs as per business requirements.
- Worked on application/database deployments in Unix/Windows environments.
- Created/updated system manuals and knowledge base articles to reflect new changes.
- Investigated and resolved production issues as per the agreed SLA; where necessary, escalated to the relevant stakeholders based on the analysis.
- Coordinated with the WebLogic team on application performance issues.
- Resolved data and XML issues reported by users.
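A small sketch of the file-availability check and study-folder cleanup described above; the directory paths, expected-file pattern, and retention window are illustrative assumptions:

```python
# Check that expected feed files have arrived in a landing directory and
# remove files older than a retention window from a study folder.
import time
from pathlib import Path

LANDING_DIR = Path("/data/landing/study_feeds")   # hypothetical inbound directory
STUDY_DIR = Path("/data/studies/STUDY_001")       # hypothetical study folder
RETENTION_DAYS = 30

def expected_files_present(pattern: str = "*.csv") -> bool:
    """Return True if at least one expected feed file has arrived."""
    return any(LANDING_DIR.glob(pattern))

def clean_study_folder() -> None:
    """Delete files in the study folder older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    for f in STUDY_DIR.rglob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()

if __name__ == "__main__":
    if not expected_files_present():
        print("ALERT: expected feed files not found in the landing directory")
    clean_study_folder()
```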
Skills
PL/SQL, SQL Server 2005, Unix Shell
Company
Investment Banking
Description
Responsibilities
- Analyzed and resolved live incidents as per the agreed SLA; where necessary, escalated to the relevant dealer groups based on the analysis.
- Strictly maintained and followed the SLA.
- Performed RCA of bridge cases and prepared detailed reports on them.
- Identified production incidents/issues and fixed them by making the necessary changes.
- Troubleshot day-to-day production issues and maintained the applications on Oracle.
- Created and scheduled jobs via Control-M to transfer files and run reports.
- Performed sanity testing as part of post-release support to ensure the application was stable and functionality worked as expected.
- Implemented standard Oracle nomenclature along with the client's standards.
- Resolved data and business issues reported by users.
- Raised many GCMs to promote changes to production.
- Involved in database production releases.
- Coordinated/facilitated transitions (planning, sign-off, team meetings, and escalations).
- Handled failures during batch runs, finding the errors on the Unix box and fixing them with reference to the known-error database (see the sketch after this list).
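A sketch of the batch-failure triage described above: scan a batch log on the Unix box for error lines and look each one up in a known-error list. The log path, known-error file, and its columns are illustrative assumptions:

```python
# Match ERROR/ORA- lines from a batch log against a CSV of known errors
# (pattern -> resolution); anything unmatched is flagged for a new incident.
import csv
import re
from pathlib import Path

BATCH_LOG = Path("/app/logs/batch_run.log")           # hypothetical batch log
KNOWN_ERRORS = Path("/app/support/known_errors.csv")  # columns: pattern, resolution

def load_known_errors() -> list[tuple[re.Pattern, str]]:
    with KNOWN_ERRORS.open() as f:
        return [(re.compile(row["pattern"]), row["resolution"])
                for row in csv.DictReader(f)]

def triage_batch_log() -> None:
    known = load_known_errors()
    for line in BATCH_LOG.read_text().splitlines():
        if "ERROR" not in line and "ORA-" not in line:
            continue
        for pattern, resolution in known:
            if pattern.search(line):
                print(f"KNOWN ERROR: {line}\n  -> suggested fix: {resolution}")
                break
        else:
            print(f"NEW ERROR (raise incident): {line}")

if __name__ == "__main__":
    triage_batch_log()
```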