Sayali R.

2+ years of experience in Hadoop/Spark development at HCL Technologies

India

Experience: 2 Years

5142.85 USD / Year

  • Availability: Immediate

About Me

·         2+ years of experience in software development with Java, Hadoop, and Spark.

·         Hands-on experience with Big Data tools: Hadoop MapReduce, Pig, Hive, HBase, Sqoop, Flume, Hadoop YARN, and Apache Spark.

·         Good understanding of Hadoop architecture and its components, such as HDFS and MapReduce.

·         Familiar with advanced Spark features such as GraphX and machine learning.

·         Importing data from SQL Server/Oracle into HDFS, and exporting it back, using Sqoop.

·         Streaming unstructured data from web sources using Flume.

·         Excellent analytical, problem-solving, communication, and interpersonal skills, with the ability to work both as part of a team and independently.

·         Ability to perform at a high level, meet deadlines, and adapt to ever-changing priorities.

·         Exceptional ability to quickly master new concepts.

Portfolio Projects

Set top box device analytics

Company

Set top box device analytics

Role

Software Architect

Contribute

1. Responsible for understanding the scope of the project and gathering requirements.

2. Responsible for collecting STB logs using Flume and Kafka.

3. Analyzed the set-top box dataset.

4. Set top box logs

Description

1. This application analyzes set-top box (STB) log data with optimum performance.

2. The log data contains both plain text and XML records.

3. The data is stored in HDFS in a distributed manner over a cluster of nodes, which addresses scalability, high availability, and fault tolerance.

4. The STB logs are collected using Flume and Kafka, then processed using Spark RDDs and Spark SQL.

5. The architecture solves the problems of limited storage and processing capacity; the system is scalable, reliable, and fault tolerant.
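As a rough illustration of the processing step above (logs arriving via Flume/Kafka, parsed in Spark), the per-record parsing of mixed text/XML STB logs might look like the following sketch. The log format, field names, and the `parse_line` helper are hypothetical, and the actual Flume/Kafka ingestion and Spark calls are only indicated in comments.

```python
import xml.etree.ElementTree as ET

def parse_line(line):
    """Parse one STB log line into a (device_id, event) pair.

    The format here is hypothetical: XML records carry
    <event device="..." type="..."/> attributes, while plain-text
    records are 'device_id<TAB>event'.
    """
    line = line.strip()
    if line.startswith("<"):
        # XML log record: pull device id and event type from attributes
        root = ET.fromstring(line)
        return (root.get("device"), root.get("type"))
    # Plain-text log record: tab-separated device id and event
    device_id, event = line.split("\t", 1)
    return (device_id, event)

# In the real pipeline these lines would arrive from Flume/Kafka and be
# parsed in parallel over an RDD, e.g.:
#   pairs = sc.textFile("hdfs:///stb/logs").map(parse_line)
#   spark.createDataFrame(pairs, ["device", "event"]) \
#        .createOrReplaceTempView("events")
sample = [
    "stb-001\ttune",
    '<event device="stb-002" type="reboot"/>',
]
pairs = [parse_line(l) for l in sample]
```

Applying the same function per record keeps the parsing logic testable on its own, independent of the cluster.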

Tools

Eclipse