Bigdata Hadoop Developer
Willing to travel to client location: Yes
About Me
Role:
Description: The objective of this project is to create Hive external tables from master tables, preprocess the data, and store it back into Hive external tables, where data scientists use it to train their models. Responsibilities:
Skills: PySpark, SBT
Tools: IntelliJ IDEA
Role:
Description: The objective of this project is to build a real-time streaming data pipeline whose output is loaded into MongoDB, where data scientists use it to run their predictive models. Responsibilities:
Skills: Splunk, Logstash, Apache Kafka, MongoDB
Tools: IntelliJ IDEA
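The pipeline above can be sketched as a consumer loop: decode each Kafka message into a document and insert it into MongoDB. This is an illustrative sketch only; the topic, field names, and connection strings are hypothetical, and it assumes the `kafka-python` and `pymongo` client libraries.

```python
import json


def parse_event(raw: bytes) -> dict:
    """Decode one Kafka message (JSON bytes) into a MongoDB-ready document."""
    event = json.loads(raw.decode("utf-8"))
    # Keep only the fields the downstream models need (hypothetical names).
    return {
        "device_id": event["device_id"],
        "metric": float(event["metric"]),
        "ts": event["ts"],
    }


def run_pipeline() -> None:
    """End-to-end wiring; shown for shape, not invoked here."""
    from kafka import KafkaConsumer      # kafka-python
    from pymongo import MongoClient

    consumer = KafkaConsumer(
        "sensor-events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
    )
    collection = MongoClient("mongodb://localhost:27017")["ml"]["events"]
    for msg in consumer:
        collection.insert_one(parse_event(msg.value))
```

Separating `parse_event` from the transport keeps the message-handling logic testable without a running Kafka broker or MongoDB instance.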
Role:
Description: The objective of the project is to develop Spark applications that convert Informatica workflows, loading the results into Hive ORC tables for use by downstream systems. The end-to-end flow is scheduled using Talend.
Skills: Apache Spark, Shell Scripting, Linux, Hadoop Distributed File System (HDFS), Apache Hive
Tools: Scala IDE
Role:
Skills: Apache Spark, SBT, Apache Hive, Hadoop Distributed File System (HDFS), Linux, Splunk, MongoDB, Apache Kafka, Shell Scripting, Logstash, Testing Framework
Tools:
Role:
Skills: Apache Spark, SBT, Apache Hive, Hadoop Distributed File System (HDFS), Linux, PySpark
Tools:
Skills: Splunk, Logstash, Apache Kafka, Apache Spark, MongoDB, PySpark, Hadoop Distributed File System (HDFS), Apache Hive, Linux, Ambari, Oozie
Your Role and Responsibilities:
HADOOP ECOSYSTEM: HDFS, Hive, Sqoop
APACHE SPARK: Spark Core, Spark SQL, Spark Streaming
PROGRAMMING
Skills: T-SQL, SQL, Stored Procedures
Your Role and Responsibilities: