RAJARAM S.

Senior Software Engineer

Commitment
0/5
Competency
0/5
Reliability
0/5
  • Overall Experience: 10 Years
  • Adobe Flash
  • Agile Software Development
  • AJAX
  • Algorithm Development
  • Amazon Relational Database Service


Time zones ready to work

  • Eastern Daylight [UTC -4]
  • New Delhi [UTC +5]
  • Dubai [UTC +4]
  • China (West) [UTC +6]
  • Singapore [UTC +7]
  • Hong Kong (East China) [UTC +8]

Willing to travel to client location: Yes  

About Me 

Aspiring to take on challenging assignments in the field of IT, leveraging my professional and personal skills for organizational growth. A value-driven and highly competent Senior Software Engineer at Mindtree with 4.11 years of experience in Big Data; strong technical acumen and analytical skills; demonstrated ability to manage multiple priorities and develop solutions in a rapidly changing environment; ethical, loyal, and able to maintain a high degree of confidentiality.


Portfolios

Google Analytics: Extracted marketing data from Google Analytics to measure Ads ROI and track key metrics of Flash, video, and social networking sites and applications.

Role:

  • Wrote a Scala wrapper to extract the JSON response and ingest the data into Azure ADLS.

  • Flattened complex JSON to CSV using Databricks Spark to make the data compatible with all downstream jobs.

  • Created a data model for Google Analytics for multiple reports and implemented a data lake in Azure ADLS.
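The flattening step above used Databricks Spark; the idea can be sketched in plain Python (function and key names here are illustrative, not the production code):

```python
import csv
import io

def flatten(obj, parent_key="", sep="."):
    """Recursively flatten a nested JSON-like dict into dot-separated keys."""
    items = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

def records_to_csv(records):
    """Flatten a list of JSON records and serialize them as CSV text."""
    flat = [flatten(r) for r in records]
    # Union of all keys becomes the CSV header; missing fields stay empty.
    fieldnames = sorted({k for row in flat for k in row})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(flat)
    return buf.getvalue()
```

In Spark the same effect comes from exploding/selecting nested columns into a flat schema before writing CSV; the dot-separated column naming mirrors what that produces.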


Skills:

Tools: Azure Notebook, IntelliJ IDEA, Visual Studio (Win), Azure

Facebook Data Analysis: Extracted marketing data from Facebook via API calls to measure Ads ROI and track key metrics of Flash, video, and social networking sites and applications.

Role:

Extracted marketing data from Facebook using the Facebook API to measure Ads ROI and track key metrics of Flash, video, and social networking sites and applications.

Skills: Python, Apache Hive

Tools: Azure Notebook, Visual Studio (Win), IntelliJ IDEA

UDL

Role:

Google Analytics: Extracted marketing data from Google Analytics to measure Ads ROI and track key metrics of Flash, video, and social networking sites and applications.

Responsibilities:

  • Wrote a Scala wrapper to extract the JSON response and ingest the data into Azure ADLS.
  • Flattened complex JSONs to CSV using Databricks Spark to make the data compatible with all downstream jobs.
  • Created a data model for Google Analytics for multiple reports and implemented a data lake in Azure ADLS.
  • Coordinated with business customers to gather business requirements and interacted with technical peers to derive technical requirements.
  • Converted hard and complex functional requirements into detailed designs.
  • Fully responsible for the data model for storing and processing data and for generating and reporting alerts; this model is being adopted as the standard across all regions as a global solution.
  • Used Spark to create structured data from large amounts of unstructured data from various sources.
  • Developed Spark code using Scala and Spark-SQL for faster processing and testing, and performed complex HiveQL queries on Hive tables.
  • Wrote Spark UDFs to incorporate complex business logic into Spark while performing high-level data analysis.
  • Used Azure Data Factory to create pipelines that manage interdependent Spark jobs and automate several types of Spark jobs.

Facebook Data Analysis: Extracted marketing data from Facebook using the Facebook API to measure Ads ROI and track key metrics of Flash, video, and social networking sites and applications.

Responsibilities:

  • Reviewed and verified the technical document in the low-level design.
  • Responsible for deriving, transforming, and delivering Facebook data to the Business Development Layer.
  • Handled Facebook data fetches for Unilever, loading and transforming data into Azure Data Lake in structured format for developing dashboards.
  • Accountable for team deliveries on the Facebook data ingestion team in an Agile environment, using Agile methodologies.
  • Analyzed the Data Management Report (the client requirement for global and India data) and devised optimized techniques by developing Scala and Spark code in the Azure Databricks environment.
  • Used the DataFrame API in Scala to work with distributed collections of data organized into named columns, and developed predictive analytics using Apache Spark Scala APIs.
  • Used Spark APIs to cleanse, explore, aggregate, and transform trade transaction data.
  • Improved the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark-SQL, DataFrames, and pair RDDs.
  • Used Spark SQL to process large amounts of structured data.
  • Wrote Spark UDFs to incorporate complex business logic into Hive queries while performing high-level data analysis.
  • Created partitioned tables in Hive; used Hive to analyze partitioned and bucketed data and compute various metrics for reporting.
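Managing interdependent Spark jobs in an Azure Data Factory pipeline amounts to running them in dependency order. A minimal sketch of that ordering with Python's standard library (job names are hypothetical, not from the actual pipeline):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each job maps to the jobs it depends on.
jobs = {
    "ingest_raw": set(),
    "flatten_to_csv": {"ingest_raw"},
    "build_model": {"flatten_to_csv"},
    "publish_reports": {"build_model"},
}

# static_order() yields each job only after all of its dependencies.
order = list(TopologicalSorter(jobs).static_order())
```

ADF evaluates the same kind of graph from activity dependencies and additionally runs independent branches concurrently; the sketch shows only the ordering constraint.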

Skills: Apache Scala, Python, Apache Spark, Apache Hive, Apache Sqoop, Azure, Data Warehouse, Jira, GIT, Google Analytics, SQL, Azure Data Factory, ME API, Hadoop

Tools:

espJen

Role:

espJen is the core data warehouse for Celgene, storing the different types of drugs available in a market, their manufacturer details, drug prices, and units sold on a monthly basis. espJen can provide various reports, such as:

  1. Number of units sold for a manufacturer's product for a particular geographic area and period.
  2. Number of manufacturers present in a particular market.
  3. Number of products/drugs available for a disease.
  4. Total/profit of a product in a market per period.

Responsibilities:

  • Reviewed and verified the technical document in the low-level design.
  • Coordinated and conducted knowledge-transfer sessions for other team members.
  • Used Spark APIs to cleanse, explore, aggregate, and transform the trade data.
  • Developed Sqoop scripts to migrate data from Oracle to the Big Data environment.
  • Converted Hive queries into Spark transformations using Spark RDDs.
  • Improved the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark-SQL, DataFrames, and pair RDDs.
  • Conducted RCA to find data issues and resolve production issues.
  • Worked in an Agile/Scrum environment and used Jenkins for continuous integration and deployment.
  • Developed predictive analytics using Apache Spark Scala APIs.
  • Wrote Hive and Spark UDFs to incorporate complex business logic into Hive queries while performing high-level data analysis.
  • Worked extensively with dimensional modeling, data migration, data cleansing, data profiling, and ETL processes for data warehouses.
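The partitioned and bucketed Hive tables mentioned above distribute rows by hashing the bucketing column modulo the bucket count. An illustrative Python version for string columns (Hive uses Java's `String.hashCode` for strings; other column types hash differently):

```python
def bucket_for(key: str, num_buckets: int) -> int:
    """Which bucket a string key lands in, Hive-style: (hash & MAX_INT) % n."""
    # Java String.hashCode: h = 31*h + char, in 32-bit arithmetic.
    h = 0
    for ch in key:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Mask to a non-negative 31-bit value, as Hive does with Integer.MAX_VALUE.
    return (h & 0x7FFFFFFF) % num_buckets
```

Rows with the same key always land in the same bucket, which is what makes bucketed joins and sampling efficient.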

Skills: Apache Spark, Apache Scala, Apache Sqoop, Apache Hive, Azure, Oracle, SQL, Jenkins, Data Migration, Data Cleansing, Data Profiling

Tools:

espJen

Role:

espJen is the core data warehouse for Celgene, storing the different types of drugs available in a market, their manufacturer details, drug prices, and units sold on a monthly basis.

espJen can provide various reports, such as:

  1. Number of units sold for a manufacturer's product for a particular geographic area and period.
  2. Number of manufacturers present in a particular market.

Skills: Apache Spark, Apache Scala, Apache Sqoop, Azure Databricks, Azure Data Factory

Tools: IntelliJ IDEA, Azure, Azure Notebook, Visual Studio (Win)

Employment

Module Lead

2018/11 -

Skills: Python, Apache Spark, Azure Databricks, Azure Data Factory, Apache Sqoop

Your Role and Responsibilities:

  1. Developed Spark code using Scala and Spark-SQL for faster processing and testing, and performed complex HiveQL queries on Hive tables.
  2. Wrote Spark UDFs to incorporate complex business logic into Spark while performing high-level data analysis.
  3. Used Azure Data Factory to create pipelines that manage interdependent Spark jobs and automate several types of Spark jobs.

Big Data Engineer

2018/10 -

Skills: Apache Scala, Apache Spark, Hadoop, SQL, JSON, Parquet, Comma Separated Values (CSV), XML, AWS, Azure, Data Warehouse, Python, Apache Sqoop, Jira, GIT

Your Role and Responsibilities:

  • Experience with analysis, design, and development in Scala, Apache Spark, and the Hadoop ecosystem.
  • Proficient in analyzing and translating business requirements into technical requirements and low-level design documents.
  • Proficient at using Spark SQL/APIs to cleanse, explore, aggregate, transform, and store data.
  • Experience loading data into Spark schema RDDs and querying them using Spark-SQL.
  • Experience analyzing large volumes of data using Hive Query Language; assisted with performance tuning.
  • Experienced with different file formats: JSON, ORC, Parquet, CSV, text files, sequence files, and XML.
  • Hands-on experience with systems-building languages such as Scala.
  • Experience building data pipelines using Azure Data Factory.
  • Efficient with the Hive data warehouse tool: creating tables, distributing data via partitioning and bucketing, and writing and optimizing HiveQL queries.
  • Created Hive external tables to stage data, then moved the data from staging to main tables.
  • Utilized AWS services focused on big data/analytics, enterprise data warehouse, and business intelligence solutions to ensure scalability, flexibility, availability, and performance, and to provide meaningful, valuable information for better decision-making.
  • Quality-conscious, hard-working team player, enthusiastic about learning new things, with a target-oriented approach, strong problem-solving skills, and the ability to quickly grasp new concepts and technologies.
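The partitioning mentioned above maps to Hive's on-disk layout, where each partition is a `key=value` directory under the table's base path. A tiny sketch of that layout (the base path and partition keys here are made up for illustration):

```python
def partition_path(base: str, **parts: str) -> str:
    """Build a Hive-style partition directory path from key=value segments."""
    segments = [f"{k}={v}" for k, v in parts.items()]
    return "/".join([base.rstrip("/")] + segments)
```

Queries that filter on partition columns (e.g. `WHERE year = '2020'`) let Hive prune whole directories instead of scanning the full table, which is why partitioning the staging and main tables pays off.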

Software Developer

2013/09 - 2017/10

Skills: Apache Spark, Apache Scala, Apache Sqoop, Apache Hive, Azure, Oracle, SQL, Jenkins, Data Migration, Data Cleansing, Data Profiling

Your Role and Responsibilities:

espJen is the core data warehouse for Celgene, storing the different types of drugs available in a market, their manufacturer details, drug prices, and units sold on a monthly basis. espJen can provide various reports, such as the number of units sold for a manufacturer's product for a particular geographic area and period, the number of manufacturers present in a particular market, the number of products/drugs available for a disease, and the total/profit of a product in a market per period.

  • Reviewed and verified the technical document in the low-level design.
  • Coordinated and conducted knowledge-transfer sessions for other team members.
  • Used Spark APIs to cleanse, explore, aggregate, and transform the trade data.
  • Developed Sqoop scripts to migrate data from Oracle to the Big Data environment.
  • Converted Hive queries into Spark transformations using Spark RDDs.
  • Improved the performance and optimization of existing algorithms in Hadoop using Spark Context, Spark-SQL, DataFrames, and pair RDDs.
  • Conducted RCA to find data issues and resolve production issues.
  • Worked in an Agile/Scrum environment and used Jenkins for continuous integration and deployment.
  • Developed predictive analytics using Apache Spark Scala APIs.
  • Wrote Hive and Spark UDFs to incorporate complex business logic into Hive queries while performing high-level data analysis.
  • Worked extensively with dimensional modeling, data migration, data cleansing, data profiling, and ETL processes for data warehouses.

Education

2003 - 2006


Skills

Adobe Flash, Agile Software Development, AJAX, Algorithm Development, Amazon Relational Database Service

Tools

Visual Studio (Win), Eclipse, IntelliJ IDEA, Azure DevOps

Preferred Languages

English - Fluent
Hindi -