About Me
− Experience in developing applications using Apache Spark, Scala, Kafka, NiFi & Kylo.
− Proficient in Apache Pig and Apache Hive.
− Experience in MongoDB, Redis, Neo4j, Shell Scripting, and Hadoop Testing.
− Hands-on experience in implementing and testing Big Data applications.
− Experience importing data from RDBMS to HDFS using Sqoop.
− Experience loading data into Hive partitions and creating buckets in Hive (see the sketch after this list).
− Knowledge of developing UDFs for Hive and Pig using Java.
− Good Understanding of Object-Oriented Programming.
− Experience in batch scheduling and writing shell scripts.
− Experience working with the Control-M tool and batch jobs.
− Hands-on experience in Core Java.
− Experience resolving cluster and environment issues in Hadoop.
− Good knowledge of the Retail domain.
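As an illustration of the partitioned and bucketed Hive loads referenced above, here is a minimal Scala (Spark) sketch. The database, table, column names (retail.sales_partitioned, sale_date, item_id) and the landing path are assumptions for the example, not details taken from an actual project.

    import org.apache.spark.sql.SparkSession

    object PartitionedHiveLoad {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("PartitionedHiveLoad")
          .enableHiveSupport()   // connect to the Hive metastore
          .getOrCreate()

        // Hypothetical sales extract already landed on HDFS as Parquet
        val sales = spark.read.parquet("/data/landing/sales")

        // Write into a Hive-managed table, partitioned by business date
        // and bucketed by item_id to speed up joins on that key
        sales.write
          .mode("overwrite")
          .partitionBy("sale_date")
          .bucketBy(16, "item_id")
          .sortBy("item_id")
          .saveAsTable("retail.sales_partitioned")

        spark.stop()
      }
    }

Bucketing on the join key keeps rows with the same item_id together, which reduces shuffle cost for downstream joins on that column.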
Skills
Web Development
Data & Analytics
Development Tools
Software Engineering
Database
Programming Language
Others
Operating System
Software Testing
Positions
Portfolio Projects
Company
Real Time Streaming Using Spark
Role
Full-Stack Developer
Contribute
As a software developer
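Details for this project are not listed; purely as an illustration of the kind of job the title suggests, below is a minimal Scala sketch of a Spark Structured Streaming pipeline reading from Kafka. The broker address and topic name (broker1:9092, retail-events) are placeholders, not actual project values.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object RealTimeStreaming {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("RealTimeStreaming")
          .getOrCreate()

        // Read a stream of events from Kafka (placeholder broker and topic)
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "retail-events")
          .load()
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")

        // Count events in 1-minute windows, tolerating 5 minutes of late data
        val counts = events
          .withWatermark("timestamp", "5 minutes")
          .groupBy(window(col("timestamp"), "1 minute"))
          .count()

        // Write the running counts to the console sink for demonstration
        val query = counts.writeStream
          .outputMode("update")
          .format("console")
          .start()

        query.awaitTermination()
      }
    }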
Company
OPENITEM
Role
Full-Stack Developer
Contribute
Analyzed client requirements and understood the program logic. Prepared code in the Hadoop environment using Pig and Hive. Performed code reviews to maintain client standards.
Description
Open Item batch is an item information maintenance system that holds product information. An item identifies the Division, Category, Store location, and Selling price (Retail) of the merchandise. The system assigns a Key Stock Number (KSN), which identifies the product purchased by the customer by its style, size, color, and flavor. Open Item batch is not an ordering system.
Company
Di-Hadoop
Role
Full-Stack Developer
Contribute
Analyzed the LLD (low-level design document) and understood the program logic. Involved in unit testing, integration testing, and production parallel testing. Prepared issue logs to explain issues to clients.
Description
Data for the tables is extracted from Teradata and is used by other systems such as Local Publishing and Dynamic Pricing. Item and pricing information must be extracted from the Enterprise Data Hub (Hadoop).
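The tools listed for this project are HDFS, Pig, Sqoop, and UNIX shell scripting; purely as an illustrative sketch (written in Scala with Spark rather than the project's actual stack), an item-and-pricing extract from the data hub for downstream systems might look like the following. All table, column, and path names here are assumptions.

    import org.apache.spark.sql.SparkSession

    object ItemPricingExtract {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("ItemPricingExtract")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical Hive tables holding item and pricing data in the hub
        val items   = spark.table("edh.item")
        val pricing = spark.table("edh.item_pricing")

        // Join item master data with current selling prices
        val extract = items.join(pricing, Seq("item_id"))
          .select("item_id", "division", "category", "store_location", "selling_price")

        // Write a pipe-delimited extract for downstream batch consumers
        extract.write
          .mode("overwrite")
          .option("sep", "|")
          .csv("/data/extracts/item_pricing")

        spark.stop()
      }
    }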
Skills
Big Data, Hadoop Distributed File System (HDFS), Apache Pig, Apache Sqoop, UNIX Shell Scripting
Tools
PuTTY
Company
HTC
Role
Full-Stack Developer
Contribute
As a developer