About Me
Technologies experienced in: Java, Scala.
Big Data components experienced in: deep knowledge of the Big Data ecosystem, including Spark, Spark Streaming, Kafka, Kafka Streams, HBase, Hive, ZooKeeper, YARN, MapReduce, Docker, Sqoop, MongoDB, JDBC, JSON, XML, Google Protocol Buffers, etc.
Hadoop distributions experienced in: Cloudera, Hortonworks.
AWS Cloud experience:
● Fit AWS solutions inside a Big Data ecosystem.
● Leverage Apache Hadoop in the context of Amazon EMR.
● Identify the components of an Amazon EMR cluster, then launch and configure an Amazon EMR cluster.
● Use common programming frameworks available for Amazon EMR.
● Improve the ease of use of Amazon EMR by using Hadoop User Experience (Hue).
● Use in-memory analytics with Apache Spark on Amazon EMR.
● Use S3 for storage.
● Identify the benefits of using Amazon Kinesis for near-real-time Big Data processing.
● Leverage Amazon Redshift to efficiently store and analyze data.
Technical Expertise: Languages – Java SE. Tools – Git, Maven, PuTTY, Perforce. Servers – Apache Tomcat. Operating Systems – Windows, UNIX. IDE – IntelliJ.
Professional Experience:
Clairvoyant Experience – 06/19
Project: ODM (Batch Processing) & Near Real Time (NRT)
Developed a data lake for a client in the financial domain using Spark with Java and Scala on AWS. Streaming ingestion was handled with Spark Streaming and Kafka, and Kerberos was used for server authentication and data security.
Stack used: Java 7 & 8, data structures, AWS, Spark, Spark Streaming, Kafka, Kafka Streams, HBase, Hive.
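As a rough sketch of the streaming side of such a data lake (not the client's actual code), the snippet below assumes Spark Structured Streaming reading a Kafka topic and landing Parquet files on an S3-backed path; the broker addresses, topic, and paths are placeholders, and Kerberos/Kafka security settings would be supplied through external configuration (JAAS/spark-submit) rather than in code.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.Trigger;

public class NrtIngestJob {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("nrt-data-lake-ingest")
                .getOrCreate();

        // Read the raw event stream from Kafka (brokers and topic are placeholders).
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
                .option("subscribe", "transactions")
                .option("startingOffsets", "latest")
                .load()
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp");

        // Land the stream as Parquet in the data lake; the checkpoint gives fault-tolerant output.
        StreamingQuery query = events.writeStream()
                .format("parquet")
                .option("path", "s3a://example-datalake/raw/transactions/")
                .option("checkpointLocation", "s3a://example-datalake/checkpoints/transactions/")
                .trigger(Trigger.ProcessingTime("1 minute"))
                .start();

        query.awaitTermination();
    }
}
```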
Amdocs Experience: Software Developer - 11/16 - 06/19
Project: Amdocs Data Hub (ADH).
The project was to support and enhance the existing Amdocs product (ADH), which takes data from sources (Oracle, CSV files, etc.) and loads it into the Hadoop environment. The role involved enhancing ADH, analyzing the code, and fixing bugs, as well as building new pipelines from scratch such as the Kafka collector, file collector, and CSV collector.
Stack used: Java 7 & 8, data structures, AWS, Spark, Spark Streaming, Kafka, Kafka Streams, HBase, Hive, ZooKeeper, YARN, HQL, SQL.
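As a hedged illustration of a simple collector of the kind described above (not the actual ADH code), below is a minimal batch CSV collector using Spark with Hive support; the input path and table name are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class CsvCollector {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("adh-csv-collector")
                .enableHiveSupport()          // write into the Hive metastore-backed warehouse
                .getOrCreate();

        // Read the source CSV files with a header row and inferred schema (path is a placeholder).
        Dataset<Row> input = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("hdfs:///landing/customer/*.csv");

        // Append the batch into a Hive table in the data hub (table name is illustrative).
        input.write()
                .mode(SaveMode.Append)
                .saveAsTable("datahub.customer_raw");
    }
}
```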
Intra-Amdocs Inter Unit project: Updation Tool
Description: The tool is developed as microservices.
Technologies & Tools: Java, Spring Boot, REST services, Couchbase, Core Java, PuTTY.
Detailed Achievements: Developed Spring Boot services exposing REST APIs and integrated the business logic with Couchbase DB.
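As a rough illustration of this stack (not the actual Updation Tool code), below is a minimal Spring Boot sketch assuming Spring Data Couchbase; the entity, endpoint paths, and repository are hypothetical, and the Couchbase connection settings are assumed to be provided via application properties.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;
import org.springframework.data.couchbase.repository.CouchbaseRepository;
import org.springframework.web.bind.annotation.*;

@SpringBootApplication
public class UpdationToolApplication {
    public static void main(String[] args) {
        SpringApplication.run(UpdationToolApplication.class, args);
    }
}

// Hypothetical document stored in Couchbase; field names are illustrative only.
@Document
class UpdateRecord {
    @Id
    private String id;
    private String status;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

// Spring Data repository: CRUD against the Couchbase bucket without hand-written DAO code.
interface UpdateRecordRepository extends CouchbaseRepository<UpdateRecord, String> {
}

// REST endpoint exposing the business logic over HTTP.
@RestController
@RequestMapping("/updates")
class UpdateController {
    private final UpdateRecordRepository repository;

    UpdateController(UpdateRecordRepository repository) {
        this.repository = repository;
    }

    @PostMapping
    public UpdateRecord create(@RequestBody UpdateRecord record) {
        return repository.save(record);
    }

    @GetMapping("/{id}")
    public UpdateRecord get(@PathVariable String id) {
        return repository.findById(id)
                .orElseThrow(() -> new IllegalArgumentException("No record " + id));
    }
}
```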
Portfolio Projects
Description
Worked as a Big Data Engineering Analyst in the Citi Data – Big Data & Analytics Engineering Organization, a role requiring prior hands-on experience with the Hadoop ecosystem and Java, contributing to the architecture, engineering, and custom development of the Hadoop offering within the Citi Big Data Platform.
Responsibilities:
● Involved in requirement analysis, design, coding, and implementation.
● Processed data into HDFS by developing solutions, analyzed the data using Spark and Spark Streaming, and produced summary results from Hadoop (a rough sketch follows after this list).
● Used Sqoop to import data from RDBMS into the Hadoop ecosystem.
● Involved in loading and transforming sets of structured, semi-structured, and unstructured data, and analyzed them by running Hive queries and Spark SQL.
● Worked on various file formats – Avro, ORC, Parquet, SequenceFiles, text files, CSV, XML, etc.
● Managed and reviewed log files.
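As a hedged sketch of the Spark-side analysis described above (not the actual Citi code), the snippet below reads columnar landings and produces a summary with Spark SQL; the paths, database, table, and column names are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DailySummaryJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("hdfs-summary")
                .enableHiveSupport()
                .getOrCreate();

        // Data previously landed in HDFS (e.g. via Sqoop) in columnar formats; paths are placeholders.
        Dataset<Row> accounts = spark.read().parquet("hdfs:///data/accounts/");
        Dataset<Row> txns = spark.read().orc("hdfs:///data/transactions/");

        accounts.createOrReplaceTempView("accounts");
        txns.createOrReplaceTempView("transactions");

        // Produce a per-account daily summary with Spark SQL (columns are illustrative).
        Dataset<Row> summary = spark.sql(
                "SELECT a.account_id, t.txn_date, COUNT(*) AS txn_count, SUM(t.amount) AS total_amount " +
                "FROM transactions t JOIN accounts a ON t.account_id = a.account_id " +
                "GROUP BY a.account_id, t.txn_date");

        // Persist the summary as a Hive table (target database assumed to exist).
        summary.write().mode("overwrite").saveAsTable("analytics.account_daily_summary");
    }
}
```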
Description
NRT is a project that allows the flow of data end to end in near real time. Worked on development modules such as the HBase collector, Oracle collector, and Kafka collector, as well as on implementation, for the PayPal Payment Data Engineering team. The purpose of this project is to capture all data streams from different sources and store them in our secure cloud stack based on technologies including Hadoop, Spark, and Kafka. We also build new processing pipelines over transaction records, user profiles, files, and communication data ranging from emails to instant messages, and use Spark to enrich and transform data into internal data models powering search, data visualization, and analytics.
Responsibilities:
● Designed and implemented scalable infrastructure and a platform for large-scale data ingestion, aggregation, integration, and analytics in Hadoop, including MapReduce, Spark, Spark Streaming, Kafka, HDFS, and Hive.
● Wrote Sqoop scripts to import, export, and update data between HDFS/Hive and relational databases.
● Developed utilities for importing data from sources such as HDFS/HBase into Spark RDDs.
● Processed the BA's requirements end to end through Spark DataFrame functions.
● Designed and created the data models for customer data using HBase query APIs.
● Created Hive tables, then loaded and analyzed data using Hive queries.
● Utilized Kafka to capture and process real-time and near-real-time streaming data.
● Used Spark SQL and Spark Streaming for streaming data analysis.
● Developed Spark code in Java and Scala to perform data transformations, create DataFrames, and run Spark SQL and Spark Streaming applications in Scala.
● Developed a custom partitioner in Kafka (a sketch of such a partitioner follows below).
● Added a salting mechanism in HBase and Spark programs to avoid region hotspotting.
● Implemented Kerberos for authentication.
Jun 2019 - Feb 2020
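As an illustration of the custom Kafka partitioner mentioned above (not the project's actual implementation), below is a minimal sketch assuming a hypothetical "customerId:eventId" key layout, so that all of one customer's events land on the same partition and keep their ordering.

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

/**
 * Illustrative custom partitioner: routes records by the business-key prefix so that
 * all events of one customer land on the same partition.
 */
public class CustomerKeyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // keyless records fall back to a fixed partition in this sketch
        }
        // Hypothetical key layout "customerId:eventId" -> partition on the customerId part only.
        String customerId = key.toString().split(":", 2)[0];
        return (customerId.hashCode() & 0x7fffffff) % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}
```

Such a partitioner would be registered on the producer through the partitioner.class configuration property.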
Description
Amdocs Data Hub is an end-to-end platform enabling communication service providers to develop big data solutions, including data integration, data storage, and reporting. It processes and stores data in a unified data store based on the Amdocs Logical Data Model, and can consolidate and compact that data, then analyze and report business insights based on it. Worked on development modules such as the Golden Gate collector and Kafka collector, and on the implementation of entities.
Designation: Sr. Software Engineer.
Responsibilities:
Product Experience:
● Development of different features for the ADH product in:
  o Languages: Java, Scala.
  o Big Data components: in-depth knowledge of the Big Data ecosystem, Spark, Spark Streaming, Kafka, Kafka Streams, HBase, Hive, ZooKeeper, YARN, MapReduce, HUE.
  o Hadoop distributions: Cloudera, Hortonworks.
  o Cloud experience: Amazon Web Services (AWS) – EC2, Kinesis, EMR, Amazon Redshift, S3.
● Defect fixing with strong debugging skills.
On-Site Delivery Experience (interaction with customers):
● Assess and understand customer requirements, then provide the required estimates of effort and resources.
● Contribute to the architecture, detailed design, and development of varied Big Data solutions.
● Incorporate continuous integration into the delivery line.
● Responsible for designing, coding, and testing solutions delivered to clients.
● Conduct unit testing and troubleshooting.
● Apply appropriate development tools.
● Set priorities for projects, including equipment and resources, to ensure timely delivery of agreed projects.
● Assess and communicate risk in relation to solution delivery.
● Monitor and challenge KPIs for vendor performance and identify gaps and areas of service improvement.
● Ensure simplification and repeatability of dev code.
● Foster an innovative culture and approach across the ETL dev team.
● Apply the relevant security and risk management protocols as required.
● Maintain solution documentation as appropriate.
● Collaborate with teams to integrate systems.
● Provide third-level support in post-production as required.
Verifications
Phone Verified
Preferred Language: English - Fluent