Ankesh J.

Spark/Big Data Developer with 6 Years Experience

Pune, India

Experience: 6 Years

  • Rate: 32000 USD / Year

  • Notice Period: 90 Days

About Me

• Experience in developing applications using Apache Spark, Scala, Kafka, NiFi, and Kylo.

• Proficient in Apache Pig and Apache Hive.

• Experience with MongoDB, Redis, Neo4j, Shell Scripting, and Hadoop Testing.

• Hands-on experience in the implementation and testing of Big Data applications.

• Experience importing data from RDBMS to HDFS using Sqoop.

• Experience loading data into Hive partitions and creating buckets in Hive (see the sketch after this list).

• Knowledge of developing UDFs for Hive and Pig using Java.

• Good understanding of Object-Oriented Programming.

• Proficient in batch scheduling and writing shell scripts.

• Proficient in working with the Control-M tool and batch jobs.

• Hands-on experience in Core Java.

• Experience resolving cluster and environment issues in Hadoop.

• Good knowledge of the Retail domain.
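Below is a minimal Scala/Spark sketch of the kind of partitioned and bucketed Hive load referenced in the list above. The connection details, table names, and column names are illustrative placeholders, and a Spark JDBC read stands in here for the Sqoop CLI import.

```scala
import org.apache.spark.sql.SparkSession

object HivePartitionedLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HivePartitionedLoad")
      .enableHiveSupport() // needed to create managed Hive tables
      .getOrCreate()

    // Pull a source table over JDBC (stand-in for the Sqoop import);
    // connection details and the table name are placeholders.
    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://db-host:3306/retail")
      .option("dbtable", "orders")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      .load()

    // Save into a Hive table partitioned by order_date and bucketed
    // (and sorted) by customer_id.
    orders.write
      .mode("overwrite")
      .partitionBy("order_date")
      .bucketBy(8, "customer_id")
      .sortBy("customer_id")
      .format("parquet")
      .saveAsTable("retail.orders_partitioned")

    spark.stop()
  }
}
```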

Skills

Portfolio Projects

Real Time Streaming Using Spark

Company

Real Time Streaming Using Spark

Role

Full-Stack Developer

Contribute

As a software developer.

Description

Worked on processing a real-time JSON data stream using Spark and Kafka integration. Intermediate data was stored in Redis, and the final output was stored in MongoDB, which was updated continuously from the real-time stream.
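A minimal sketch of such a pipeline using Spark Structured Streaming, assuming a hypothetical Kafka topic, JSON schema, and host names; the Jedis client and the MongoDB Spark Connector are assumptions about the stack, and the connector's write option names vary by version.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType}
import redis.clients.jedis.Jedis

object KafkaToMongoStream {

  // Hypothetical JSON schema for the incoming events.
  private val schema = new StructType()
    .add("itemId", StringType)
    .add("store", StringType)
    .add("price", DoubleType)

  // Per micro-batch: keep the latest values in Redis as intermediate state,
  // then append the batch to MongoDB.
  private def processBatch(batch: DataFrame, batchId: Long): Unit = {
    val jedis = new Jedis("redis-host", 6379)
    try {
      // collect() keeps the sketch short; a real job would use
      // foreachPartition to avoid pulling data to the driver.
      batch.select("itemId", "price").collect().foreach { row =>
        jedis.set(s"latest-price:${row.getString(0)}", row.getDouble(1).toString)
      }
    } finally jedis.close()

    batch.write
      .format("mongodb")
      .mode("append")
      .option("connection.uri", "mongodb://mongo-host:27017")
      .option("database", "retail")
      .option("collection", "item_events")
      .save()
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("KafkaToMongoStream").getOrCreate()

    // Read the raw JSON stream from Kafka; broker and topic are placeholders.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka-host:9092")
      .option("subscribe", "item-events")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("event"))
      .select("event.*")

    val query = events.writeStream
      .option("checkpointLocation", "/checkpoints/item-events")
      .foreachBatch(processBatch _)
      .start()

    query.awaitTermination()
  }
}
```

Writing the final output inside foreachBatch keeps the MongoDB updates tied to Spark's micro-batch checkpointing, which matches the "update MongoDB from the real-time stream" behaviour described above.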

OPENITEM - Convert the OpenItem Mainframe Job into Hadoop

Company

OPENITEM

Role

Full-Stack Developer

Contribute

Analyzed client requirements and understood the program logic. Prepared the code in the Hadoop environment using Pig and Hive. Performed code reviews to maintain client standards.

Description

Open Item batch is an item information maintenance system that holds product information. An item identifies the Division, Category, Store Location, and Selling Price (Retail) of the merchandise. It assigns the Key Stock Number (KSN), which signifies the product purchased by the customer; the KSN identifies the style, size, color, and flavor. Open Item batch is not an ordering system.
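As a small illustration, the item record described above could be modeled roughly as follows in Scala; the field names are hypothetical and not taken from the actual OPENITEM job.

```scala
// Hypothetical model of the Open Item record: the item carries division,
// category, store location and selling price, and the assigned KSN
// identifies style, size, color and flavor.
case class Ksn(style: String, size: String, color: String, flavor: String)

case class OpenItem(
  division: String,
  category: String,
  storeLocation: String,
  sellingPrice: BigDecimal, // retail price
  ksn: Ksn
)
```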

Tools

PuTTY, SVN

Di-Hadoop

Company

Di-Hadoop

Role

Full-Stack Developer

Contribute

Analyzed the LLD (low-level design document) and understood the program logic. Involved in unit testing, integration testing, and production parallel testing. Prepared issue logs to explain issues to clients.

Description

Data for the tables is extracted from Teradata and is used by other systems such as Local Publishing and Dynamic Pricing. Item and pricing information must be extracted from the Enterprise Data Hub (Hadoop).
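A minimal Scala/Spark sketch of such an extract from the Hive-based Enterprise Data Hub; the database, table, and column names and the output path are placeholders, not the project's actual objects.

```scala
import org.apache.spark.sql.SparkSession

object ItemPricingExtract {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ItemPricingExtract")
      .enableHiveSupport() // read from the Hive-based Enterprise Data Hub
      .getOrCreate()

    // Table and column names are placeholders for the actual EDH tables.
    val extract = spark.sql(
      """SELECT i.item_id, i.description, p.store_id, p.retail_price
        |FROM edh.item i
        |JOIN edh.pricing p ON i.item_id = p.item_id""".stripMargin)

    // Land the extract as delimited files for downstream consumers such as
    // Local Publishing or Dynamic Pricing.
    extract.write
      .mode("overwrite")
      .option("sep", "|")
      .csv("/data/extracts/item_pricing")

    spark.stop()
  }
}
```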

Tools

PuTTY

HTC - Design NiFi Flow and Schedule on Kylo

Company

HTC

Role

Full-Stack Developer

Contribute

As a developer

Description

The main purpose of this project is to process data coming from multiple sources, apply transformations, and store the data in Hive. We developed data pipelines and scheduled them on Kylo.

Technology Used: Spark, Scala, NiFi, Kylo, Azure
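A minimal Scala sketch of the kind of Spark transformation job such a pipeline might run beneath the NiFi/Kylo scheduling layer; the input paths, formats, columns, and target table are assumptions for illustration.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, current_date, trim, upper}

object MultiSourceToHive {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("MultiSourceToHive")
      .enableHiveSupport()
      .getOrCreate()

    // Input paths, formats, and columns are placeholders; in the real
    // pipeline the raw feeds were landed by NiFi and the Spark job was
    // scheduled through Kylo.
    val feedA = spark.read.option("header", "true").csv("/landing/source_a/")
    val feedB = spark.read.json("/landing/source_b/")

    // Example transformation: normalise the shared key, align types,
    // and union the two feeds.
    def normalise(df: DataFrame): DataFrame =
      df.select(
        upper(trim(col("item_id"))).as("item_id"),
        col("qty").cast("long").as("qty"))

    val combined = normalise(feedA)
      .unionByName(normalise(feedB))
      .withColumn("load_date", current_date())

    // Store the transformed data in a Hive table partitioned by load date.
    combined.write
      .mode("append")
      .partitionBy("load_date")
      .format("parquet")
      .saveAsTable("staging.item_quantities")

    spark.stop()
  }
}
```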

Tools

Git, PuTTY

HTC

Company

HTC

Role

Backend Developer

Description

Processed data coming from multiple sources, applied transformations, and stored the data in Hive. Developed data pipelines and scheduled them on Kylo.

Tools

Kylo