Srinivasula R.

BI Tech Lead

India

Experience: 9 Years

81931.5 USD / Year

  • Availability: Immediate

About Me

Professional Summary:

 

  • Having 8 years of experience in the IT industry across the Automobile, Healthcare, Retail, and Telecom domains.
  • Having 2 years of experience on Azure Cloud with Bigdata and AI technologies.
  • Having 5 years of experience as a Hadoop Developer/Administrator on the Hortonworks platform.
  • Good experience on Agile, Prototype and DevOps models.
  • Designed and implemented microservices using APIM, APPGW, and Kubernetes.
  • Set up environments in Azure Cloud and enabled connectivity according to business requirements.
  • Passionate about exploring new technologies and solutions.
  • Set up environments using ARM templates for HDI, DSVM, SQL DWH, Databricks, AKS, APIM, SQL Server 2019, etc.
  • Design and develop solutions on Bigdata for both Batch and Real time processing.
  • Design architecture and implementation for the projects like Analytics [BI & MI].
  • Set up governance rules on platform and follow best practices for Security.
  • Benchmark systems, analyze platform and product bottlenecks, and propose solutions to eliminate them.
  • Clearly articulate pros and cons of various technologies and platforms.
  • Mentor the team and motivate a results-oriented approach.
  • Help program and project managers in the design, planning, and governance of implementing projects.
  • Perform detailed analysis of business problems and technical environments and use this in designing the solution.
  • Work creatively and analytically in a problem-solving environment.
  • Enable Network Watcher and implement governance best practices according to the network architecture.
  • Designed Data Ingest Factory model to ingest data with reusable code and flows.
  • Involved in setting up the clusters and installing HDF, ELK, and Dasense servers.
  • Performed platform testing such as integration testing and cross-functional testing.
  • Designed the Data Lake architecture for the enterprise data warehouse.
  • Good experience with Spark Streaming and Spark SQL implementation (a minimal sketch follows this list).
  • Experience with RDBMS, Hive, and HBase data modeling, design, and development.
  • Experience with Kafka real-time processing.
  • Developed data ingestion scripts using Sqoop, NiFi, and Kafka.
  • Developed data processing jobs using Pig, Spark, and MapReduce.
  • Experience on Oozie workflow development and deployment.
  • Experience on DevOps deployment model using VSTS.
  • Involved in Unit Testing phase.
  • Enabled monitoring solutions for Azure components such as HDI, Web App, SQL, etc.
  • Involved in RCA for production defects and resolved them.
  • Knowledge of Blockchain, HDI, BlueData, Interactive Query, etc.
  • Experience on Tableau and Power BI visualization tools.
  • Participated in Open Data Camp at Google.
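
As context for the Spark Streaming, Spark SQL, and Kafka bullets above, here is a minimal PySpark Structured Streaming sketch. It is illustrative only: the broker address, topic name, and event schema are assumptions, not details taken from any project on this profile.

```python
# Minimal PySpark Structured Streaming sketch: consume JSON events from Kafka,
# apply a Spark SQL aggregation, and stream the running counts to the console.
# Broker, topic, and schema are hypothetical placeholders; the Kafka source
# requires the spark-sql-kafka package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = (SparkSession.builder
         .appName("kafka-streaming-sketch")
         .getOrCreate())

event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream; the value column arrives as bytes.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
       .option("subscribe", "telemetry-events")            # assumed topic
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Register a view so the aggregation can be expressed as plain Spark SQL.
events.createOrReplaceTempView("events")
counts = spark.sql("""
    SELECT event_type, COUNT(*) AS cnt
    FROM events
    GROUP BY event_type
""")

# 'complete' output mode is required for a global aggregation without a watermark.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```

The same stream could be written to HDFS or a Hive table instead of the console by swapping the sink format and adding a checkpoint location.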


Portfolio Projects

Description

Designed and developed an analytics solution for Pfizer genomic data.

Developed the backend for the Pfizer CRM solution.

Data imports and exports.

Developed a framework for data transformations.

Description

The Daimler Center of Excellence team enables analytics solutions internally across Daimler. We set up an on-premise Hadoop cluster to deliver data analytics solutions for both batch and real-time processing.

We set up an Analytics and AI platform in the Azure cloud and successfully migrated from on-premise to the cloud.

Successfully rolled out cloud solutions using BYOK-enabled encryption across the technologies. Set up data governance rules for platform usage and access. Enabled new cutting-edge technologies such as Blockchain, H2O, Dataiku, and Event Hubs.

For connected cars, developed apps displayed in the car's head unit using AKS, APIM, APPGW, and Spring Boot technologies.

Roles & Responsibilities:

  • Design and develop platform architecture and provide solutions to the projects.
  • Set up the Azure cloud environment and defined the process for the Azure Data Lake and Analytics platform.
  • Created a centralized OMS dashboard to monitor different Azure subscriptions.
  • Set up Key Vault and a service principal for each source and use case.
  • Deployed HDInsight clusters using ARM templates.
  • Discussed with project teams, validated project architectures, and provided suitable solutions.
  • Set up capacity queues for all projects based on utilization, with dynamic queue creation.
  • Working as Hortonworks coordinator.
  • Resolved customer and project architect clarifications and changed the architecture based on the cluster setup.
  • Installed and configured Hadoop, HDF, and Dasense clusters.
  • Changed configurations based on customer needs without affecting other projects.
  • Implemented Capacity Queue Management, Quota Management successfully.
  • Provided L2 and L3 support for critical issues.
  • Installed network SSL certificates on both the source and Hadoop sides.
  • Implemented SMARTSENSE recommendations for cluster performance tuning.
  • Coordinated with Hortonworks and applied required security, critical issues patches to the cluster.
  • Designed migration plan and successfully migrated from HDP 2.4 to HDP 2.5.
  • Developed shell scripts for edge node cleanup and log cleanup.
  • Designed best practices for each service in the cluster and published them to project teams.
  • Professionalized the projects and successfully deployed the applications in the Hadoop cluster.
  • Designed and developed a data ingest factory model to ingest data easily with minimal setup (a minimal sketch follows this list).
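
To give a concrete sense of what a config-driven data ingest factory can look like, here is a small hypothetical Python sketch. The source types, config fields, and loader functions are assumptions for illustration and are not the actual Daimler implementation.

```python
# Hypothetical sketch of a config-driven "ingest factory": each source is
# described by a small config dict, and a registry maps the source type to
# the function that knows how to ingest it. All names here are illustrative.
from typing import Callable, Dict

INGESTORS: Dict[str, Callable[[dict], None]] = {}

def register(source_type: str):
    """Decorator that adds an ingest function to the factory registry."""
    def wrapper(fn: Callable[[dict], None]):
        INGESTORS[source_type] = fn
        return fn
    return wrapper

@register("jdbc")
def ingest_jdbc(cfg: dict) -> None:
    # Placeholder: a real flow would pull from the JDBC source
    # (e.g. via Sqoop or Spark JDBC) into the landing zone.
    print(f"Ingesting JDBC table {cfg['table']} into {cfg['target_path']}")

@register("file")
def ingest_file(cfg: dict) -> None:
    # Placeholder: a real flow would copy or stream files into HDFS/ADLS.
    print(f"Ingesting files from {cfg['source_path']} into {cfg['target_path']}")

def run_ingest(config: dict) -> None:
    """Dispatch one ingest job based on its declared source type."""
    try:
        ingest = INGESTORS[config["type"]]
    except KeyError:
        raise ValueError(f"No ingestor registered for type: {config.get('type')}")
    ingest(config)

if __name__ == "__main__":
    # Two example source definitions; in practice these would come from a
    # YAML/JSON config file maintained by the project teams.
    sources = [
        {"type": "jdbc", "table": "sales.orders", "target_path": "/landing/orders"},
        {"type": "file", "source_path": "/incoming/logs", "target_path": "/landing/logs"},
    ]
    for src in sources:
        run_ingest(src)
```

New sources are then onboarded by registering one more ingest function and adding a config entry, which is what keeps the per-source setup minimal.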

Description

Apple maintains iTunes transactional data in a Teradata DW system, but due to performance and memory bottlenecks it is being replaced with Hadoop Hive. Data is extracted from the OLTP system and loaded into Hive on an incremental basis, with full refreshes, after applying ETL transformations. Once F2C is completed, workflows populate data into the semantic layer for analysis; Oozie workflows then populate the aggregates for reporting. Apple creates vendor-based, royalty-based, and transaction-based reports, visualized using Tableau. The reports are embedded with R analytics algorithms and published directly to iPhone, iPad, and laptop users.
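
As an illustration of the incremental Hive load pattern described above (not Apple's actual code), here is a minimal PySpark sketch that creates a partitioned target table and overwrites a single load-date partition per run; the database, table, and column names are assumed.

```python
# Minimal PySpark sketch of an incremental Hive load: a partitioned target
# table is created once, then each run overwrites the partition for one load
# date from an upstream staging table. All names are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-incremental-load-sketch")
         .enableHiveSupport()
         .getOrCreate())

# One-time DDL for the target table, partitioned by the load date.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.transactions (
        txn_id      STRING,
        vendor_id   STRING,
        amount      DOUBLE
    )
    PARTITIONED BY (load_dt STRING)
    STORED AS ORC
""")

load_dt = "2020-01-15"  # normally passed in by the Oozie workflow

# Incremental refresh: overwrite only the partition for this load date,
# reading the day's extract from a staging table populated upstream.
spark.sql(f"""
    INSERT OVERWRITE TABLE dw.transactions PARTITION (load_dt = '{load_dt}')
    SELECT txn_id, vendor_id, amount
    FROM staging.transactions_extract
    WHERE extract_dt = '{load_dt}'
""")
```

A full refresh follows the same shape, except the whole table (or all partitions) is overwritten instead of a single load-date partition.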

Responsibilities Undertaken:

  • Understood the SRS and prepared the technical specification document.
  • Designed and developed ETL and DWH workflows using Teradata and Hive.
  • Worked as offshore coordinator and communicated with the onsite team as well as the client.
  • Designed and developed the Hive DDL according to the data source and DW schema.
  • Defined incremental and full-refresh tables according to the data workflow.
  • Design and Develop data model for both Hive and Cassandra databases.
  • Design and Develop ETL workflow in Hive using Oozie workflow.
  • Modified Oozie workflows according to business flow changes.
  • Developed scripts to extract data from Hive for downstream applications.
  • Design and Developed Tableau Reports and implemented R integration.
  • Involved in Resolving Production issues.
  • Developed complex queries for Reports like royalty, Icharts etc.
  • Involved in Unit Testing.

Involved in RFP preparation and designing new architectures for upcoming projects.

Description

Walmart built an analysis platform for different business domains, including assortment, next-gen pricing, Retail Link, space management, competitor analysis, and enterprise inventory.

Created a Hadoop environment with the Hortonworks distribution, with data ingestion from different sources and different kinds of data. Created workflows to process data, visualize advanced analytical results, and publish reports automatically to decision makers.

Roles and Responsibilities:

  • Understood the business need, planned the Hadoop cluster, and developed AutoSys jobs.
  • Installed and configured a multi-node Hadoop cluster.
  • Involved in installing Hadoop Ecosystems.
  • Configured and installed multiple data ingestion paths into Hadoop, such as Sqoop, NFS, and Flume.
  • Commissioned and decommissioned data nodes based on data velocity and business requirements.
  • Involved in Quota Management, User Creation and Permissions.
  • Prepared DSR/MSR metrics on cluster status, such as HDFS space and jobs submitted.
  • Involved in solving production defects.
  • Involved in performance tuning, Cluster monitoring etc.
  • Configured Capacity Scheduler based on the business need.
  • Supported the Dev, QA, and Production environments whenever issues occurred.
  • Created user profiles based on a template and provided access based on requirements.
  • Configured Kerberos security.
  • Configured the Ecosystems like Hive, Pig, HBase, Sqoop, Flume, Tableau.
  • Monitored cluster health status and killed long-running jobs (see the sketch after this list).
  • Resolved production tickets using JIRA.
  • Adding and removing cluster nodes, cluster planning, performance tuning, cluster Monitoring, Troubleshooting.
  • Involved in HDFS maintenance and administering it through the Hadoop Java API.
  • Configured Fair Scheduler to provide service-level agreement for multiple users of a cluster.
  • Maintained and monitored the cluster; loaded data into the cluster from dynamically generated files using Flume and from relational database management systems using Sqoop.
  • Managing nodes on Hadoop cluster connectivity and security.
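
To make the "monitor cluster health and kill long-running jobs" task concrete, here is a small hypothetical Python sketch against the YARN ResourceManager REST API; the ResourceManager address and the four-hour threshold are assumptions.

```python
# Hypothetical monitoring sketch: list RUNNING YARN applications via the
# ResourceManager REST API and flag any that have exceeded a runtime threshold.
# The RM host/port and threshold are placeholders; killing a flagged job would
# be a PUT to /ws/v1/cluster/apps/{app_id}/state (with appropriate auth).
import time
import requests

RM_URL = "http://resourcemanager.example.com:8088"  # assumed RM address
MAX_RUNTIME_MS = 4 * 60 * 60 * 1000                 # assumed 4-hour threshold

def long_running_apps():
    resp = requests.get(f"{RM_URL}/ws/v1/cluster/apps", params={"states": "RUNNING"})
    resp.raise_for_status()
    apps = (resp.json().get("apps") or {}).get("app", [])
    now_ms = int(time.time() * 1000)
    return [a for a in apps if now_ms - a["startedTime"] > MAX_RUNTIME_MS]

if __name__ == "__main__":
    for app in long_running_apps():
        # In practice this would alert the on-call admin or kill the job after review.
        print(f"Long-running: {app['id']} ({app['name']}) user={app['user']}")
```

A script like this would typically run from cron on an edge node and either alert the admin team or kill the flagged applications after review.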
