About Me
Aspiring to challenging assignments in IT where I can leverage my professional and personal skills for organizational growth and reach higher echelons. A value-driven and highly competent Senior Software Engineer at Mindtree with 4.11 years of experience in Big Data; strong technical acumen and analytical skills; a demonstrated ability to manage multiple priorities and develop solutions in a rapidly changing environment; ethical, loyal, and committed to maintaining a high degree of confidentiality.
Skills
Portfolio Projects
Description
- Wrote a Scala wrapper to extract the JSON response and ingest the data into Azure ADLS.
- Flattened complex JSON into CSV using Databricks Spark to make the data compatible with all downstream jobs (see the sketch after this list).
- Created a data model for Google Analytics covering multiple reports and implemented a data lake in Azure ADLS.
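A minimal sketch of the JSON-flattening step described above, assuming a hypothetical nested response with `reports`/`rows` arrays; the ADLS paths, field names, and schema are illustrative placeholders, not the actual project layout.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, explode}

object FlattenJsonToCsv {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("FlattenGaJsonToCsv")
      .getOrCreate()

    // Read the raw JSON response previously landed in ADLS.
    // Path and nested field names below are assumptions for illustration.
    val raw = spark.read
      .option("multiLine", "true")
      .json("abfss://raw@storageaccount.dfs.core.windows.net/ga/response.json")

    // Explode the nested arrays and project scalar columns so every
    // downstream job sees a flat, tabular layout.
    val flat = raw
      .select(explode(col("reports")).as("report"))
      .select(explode(col("report.data.rows")).as("row"))
      .select(
        col("row.dimensions").getItem(0).as("date"),
        col("row.dimensions").getItem(1).as("channel"),
        col("row.metrics").getItem(0).getField("values").getItem(0).as("sessions")
      )

    // Write the flattened data as CSV for downstream consumption.
    flat.write
      .mode("overwrite")
      .option("header", "true")
      .csv("abfss://curated@storageaccount.dfs.core.windows.net/ga/flattened")
  }
}
```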
Description
Google Analytics: Extracted marketing data from Google Analytics to help measure ad ROI and track key metrics of Flash, video, and social-networking sites and applications.
Responsibilities:
- Wrote a Scala wrapper to extract the JSON response and ingest the data into Azure ADLS.
- Flattened complex JSON into CSV using Databricks Spark to make the data compatible with all downstream jobs.
- Created a data model for Google Analytics covering multiple reports and implemented a data lake in Azure ADLS.
- Coordinated with business customers to gather business requirements and interacted with technical peers to derive the technical requirements.
- Converted complex technical and functional requirements into detailed designs.
- Fully responsible for the data model used to store and process data and to generate and report alerts; this model is being adopted across all regions as the standard global solution.
- Used Spark to create structured data from large amounts of unstructured data from various sources.
- Developed Spark code in Scala and Spark SQL for faster processing and testing, and performed complex HiveQL queries on Hive tables.
- Wrote Spark UDFs to incorporate complex business logic into Spark for high-level data analysis (a sketch follows after this project summary).
- Used Azure Data Factory to create pipelines that manage interdependent Spark jobs and automate several types of Spark jobs.
Facebook Data Analysis: Extracted marketing data from Facebook via the Facebook API to help measure ad ROI and track key metrics of Flash, video, and social-networking sites and applications.
Responsibilities:
- Reviewed and verified the technical document in the low-level design.
- Responsible for deriving, transforming, and delivering Facebook data to the business development layer.
- Handled the Facebook data fetch for Unilever, loading and transforming the data into Azure Data Lake in a structured format for dashboard development.
- Accountable for team deliveries in an Agile environment for the Facebook data ingestion team.
- Analyzed the Data Management Report (the client requirement for global and India data) and developed an optimized solution in Scala and Spark in the Azure Databricks environment.
- Used the DataFrame API in Scala to work with distributed collections of data organized into named columns, and developed predictive analytics using the Apache Spark Scala APIs.
- Used Spark APIs to cleanse, explore, aggregate, and transform trade transaction data.
- Improved the performance of existing Hadoop algorithms using Spark Context, Spark SQL, DataFrames, and pair RDDs.
- Used Spark SQL to process large volumes of structured data.
- Wrote Spark UDFs to incorporate complex business logic into Hive queries for high-level data analysis.
- Created partitioned tables in Hive, and used Hive to analyze partitioned and bucketed data and compute various metrics for reporting.
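A hedged illustration of the Spark UDF pattern mentioned above, assuming a made-up spend-classification rule and hypothetical table/column names (`marketing.ga_campaign_daily`, `ad_spend`); the actual business logic is not described in the source.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

object SpendClassifierUdf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SpendClassifierUdf")
      .getOrCreate()

    // Hypothetical business rule wrapped as a UDF: bucket daily ad spend
    // into bands that downstream reports can group on.
    val spendBand = udf { (spend: Double) =>
      if (spend >= 10000) "high"
      else if (spend >= 1000) "medium"
      else "low"
    }

    // Table and column names are illustrative placeholders.
    val campaigns = spark.table("marketing.ga_campaign_daily")
    val banded = campaigns.withColumn("spend_band", spendBand(col("ad_spend")))

    // The same UDF can also be registered for use from Spark SQL / HiveQL.
    spark.udf.register("spend_band", spendBand)
    banded.createOrReplaceTempView("campaign_banded")
    spark.sql(
      "SELECT spend_band, COUNT(*) AS campaigns FROM campaign_banded GROUP BY spend_band"
    ).show()
  }
}
```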
Description
espJen is the core data warehouse for Celgene; it stores the different types of drugs available in a market, their manufacturer details, drug prices, and units sold on a monthly basis. espJen can provide various reports, such as:
- Number of units sold for a manufacturer's product in a particular geographic area and period.
- Number of manufacturers present in a particular market.
- Number of products/drugs available for a disease.
- Total profit of a product in a market per period.
Responsibilities:
- Reviewed and verified the technical document in the low-level design.
- Coordinated and conducted knowledge-transfer sessions for other team members.
- Used Spark APIs to cleanse, explore, aggregate, and transform trade data.
- Developed Sqoop scripts to migrate data from Oracle to the big data environment.
- Converted Hive queries into Spark transformations using Spark RDDs (illustrated in the sketch after this list).
- Improved the performance of existing Hadoop algorithms using Spark Context, Spark SQL, DataFrames, and pair RDDs.
- Conducted root-cause analysis to find data issues and resolve production issues.
- Worked in an Agile/Scrum environment and used Jenkins for continuous integration and deployment.
- Developed predictive analytics using the Apache Spark Scala APIs.
- Wrote Hive and Spark UDFs to incorporate complex business logic into Hive queries for high-level data analysis.
- Worked extensively with dimensional modeling, data migration, data cleansing, data profiling, and ETL processes for data warehouses.
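As an example of moving a Hive query onto Spark, a minimal sketch under assumed table and column names (`espjen.monthly_sales`, `units_sold`; the actual espJen schema is not given in the source): the same monthly units-sold aggregation expressed once in Spark SQL and once with the DataFrame API.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, sum}

object EspJenSalesReport {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark read the existing Hive warehouse tables.
    val spark = SparkSession.builder()
      .appName("EspJenSalesReport")
      .enableHiveSupport()
      .getOrCreate()

    // The original HiveQL, now executed by Spark SQL.
    // Table and column names are assumptions for illustration.
    val viaSql = spark.sql(
      """SELECT manufacturer, geography, report_month, SUM(units_sold) AS total_units
        |FROM espjen.monthly_sales
        |GROUP BY manufacturer, geography, report_month""".stripMargin)

    // The equivalent DataFrame transformation, as used when the Hive
    // query is rewritten on top of Spark.
    val viaDataFrame = spark.table("espjen.monthly_sales")
      .groupBy(col("manufacturer"), col("geography"), col("report_month"))
      .agg(sum(col("units_sold")).as("total_units"))

    viaSql.show(20)
    viaDataFrame.show(20)
  }
}
```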