Prashant G.

Total experience of 5 years in the IT industry, including 2.8 years of experience in Hadoop administration

Bangalore, India

Experience: 5 Years

42306 USD / Year

  • Notice Period: Days

About Me

• Overall 5+ years of IT experience as a Hadoop Administrator, with additional experience in the Pentaho BI tool and MySQL.

• 2.8 years of hands-on experience as a Hadoop Administrator on MapR and Hortonworks distributions.

• Hands-on experience with ecosystem components: Hive, Sqoop, Pig, HBase, Oozie, ZooKeeper and MapReduce.

• Hands-on experience in installing, configuring, supporting and managing Hadoop clusters.

• Commissioning and decommissioning nodes on a running Hadoop cluster (see the sketch after this list).

• Expertise in HDFS architecture and cluster concepts.

• Installation of various Hadoop ecosystem components and Hadoop daemons.

• Rebalancing the Hadoop cluster.

• Hands-on experience with Hadoop security using Ranger and Kerberos.

• Hands-on experience with data transfer/migration across Hortonworks clusters.

• Hands-on experience with volume mirroring in MapR.

• Hands-on experience with Hive and HBase data migration.

• Expertise in cluster installation for POC, Dev, Staging and Production environments.

• Troubleshooting, diagnosing, tuning and resolving Hadoop issues.

• Worked on importing and exporting data between MySQL databases and HDFS/Hive using Sqoop (see the Sqoop sketch after this list).

• Involved in Hive table creation, partitioning and bucketing of tables (a table-definition sketch also follows this list).

• Wrote Hive queries (HQL) for data analysis to meet business requirements.

• Sound knowledge of Relational Database Management Systems (RDBMS).

• Hands-on experience with reporting and dashboard tools such as the Pentaho BI tool.

• Good knowledge of Amazon AWS concepts such as EC2, which provides fast and efficient compute capacity.

• Working knowledge and experience of agile methodologies.

• Ability to play a key role in the team and communicate across teams.
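
A minimal sketch of the node decommissioning and rebalancing flow referred to above, assuming a Hortonworks-style HDFS cluster where dfs.hosts.exclude points at an excludes file; the hostname and paths are placeholders:

    # Add the node to the excludes file, then tell the NameNode to re-read it.
    echo "datanode05.example.com" >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -refreshNodes

    # Watch decommissioning progress in the cluster report.
    hdfs dfsadmin -report

    # After adding or removing nodes, rebalance block placement;
    # -threshold is the allowed deviation (in percent) from average disk utilization.
    hdfs balancer -threshold 10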
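
A sketch of the MySQL-to-HDFS/Hive data movement with Sqoop mentioned above; the host, database, table and user names are illustrative placeholders:

    # Import a MySQL table into Hive (data lands in HDFS under the Hive warehouse).
    sqoop import \
      --connect jdbc:mysql://mysql-host:3306/crm \
      --username etl_user -P \
      --table complaints \
      --hive-import \
      --hive-table complaints_raw \
      --num-mappers 4

    # Export aggregated results from HDFS back to a MySQL table.
    sqoop export \
      --connect jdbc:mysql://mysql-host:3306/crm \
      --username etl_user -P \
      --table complaint_summary \
      --export-dir /user/hive/warehouse/complaint_summary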
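
And a sketch of the partitioned, bucketed Hive table creation referred to above, run through hive -e; the table and column names are illustrative:

    hive -e "
    CREATE TABLE IF NOT EXISTS complaints_raw (
      complaint_id BIGINT,
      customer_id  BIGINT,
      fault_type   STRING
    )
    PARTITIONED BY (report_date STRING)          -- one partition per reporting day
    CLUSTERED BY (customer_id) INTO 16 BUCKETS   -- bucketing for joins and sampling
    STORED AS ORC;"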

Portfolio Projects

Description

Cluster maintenance, commissioning and decommissioning of data nodes.
Installation and configuration of MapR/Hortonworks Hadoop clusters; designed and developed the MapR DR setup and managed data on MapR/Hortonworks clusters.
End-to-end performance tuning of MapR clusters and Hadoop MapReduce routines against very large data sets, including MapR table operations (creation, import, export, scan, list).
Managing and monitoring the cluster.

Performed data balancing on clusters.

Application production (PROD) support on a roster basis, plus Hadoop platform support.

Working on NameNode high availability by customizing ZooKeeper services.

Managing quotas on the MapR File System (see the maprcli sketch after this list).

Recovering from node failures and troubleshooting common Hadoop cluster issues.

Responsible for MapR File System data rebalancing.

Responsible for backup and restoration of data from MapR-FS to SAN and tape as per the retention policy.
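
A hedged sketch of the MapR-FS quota and mirroring tasks above, using maprcli; the volume names and sizes are placeholders:

    # Set a hard quota and an advisory (soft) quota on a volume.
    maprcli volume modify -name projects_vol -quota 10T -advisoryquota 8T

    # Kick off a mirror sync for the DR copy of the volume.
    maprcli volume mirror start -name projects_vol_mirror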

Hadoop Admin

Description


Responsibilities:
• Design, develop and manage data on the MapR/Hadoop cluster; addition of nodes to the MapR cluster; end-to-end performance tuning of Hadoop clusters and Hadoop MapReduce routines against very large datasets.
• Hands-on experience in installing, configuring and using ecosystem components such as Hadoop MapReduce, HDFS, MapR-FS, HBase, ZooKeeper, Oozie, Hive, Sqoop, Pig and Flume.
• Monitoring Hadoop cluster job performance and capacity planning; managing nodes on the Hadoop cluster; Hadoop administration/development across Pig, Hive, HBase, MapR, Flume and Sqoop; implementing bash shell scripts to automate services and processes on servers (see the sketch after this list).
• Administration of Linux servers on CentOS and Ubuntu.
• Managing application servers in different zones such as Production and Staging.
• Actively monitoring idle threads, JVM, CPU utilization and connection pools, and troubleshooting.
• Hadoop cluster connectivity and security; implementing new Hadoop hardware infrastructure.
• HDFS support and
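
An illustrative bash sketch of the kind of service automation described in the responsibilities above; the service name and log path are hypothetical:

    #!/usr/bin/env bash
    # Restart a Hadoop daemon if its process is no longer running, and log the action.
    SERVICE="hadoop-hdfs-datanode"   # hypothetical service name
    if ! pgrep -f "$SERVICE" > /dev/null; then
        echo "$(date): $SERVICE down, restarting" >> /var/log/hadoop-watchdog.log
        sudo service "$SERVICE" restart
    fi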

Description

This project involves tracking network fault complaints from customers, and examining the formal and informal information flows within the customer complaint handling process to identify improvement areas that strengthen the network signal.
Data in the MySQL database is transformed and loaded into HDFS. This data is then analysed using Hive, which exposes the data in HDFS through a distributed-query platform. Sqoop is used to extract data from internal
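
A sketch of the Hive analysis step described above, assuming a complaints table already loaded from MySQL via Sqoop; the column names are illustrative:

    hive -e "
    SELECT region, fault_type, COUNT(*) AS complaint_count
    FROM complaints
    GROUP BY region, fault_type
    ORDER BY complaint_count DESC
    LIMIT 20;"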
