Sr. BigData Engineer | Online Jobs | Optimhire

Sr. BigData Engineer

  • 5+ years of experience in a Data Engineering role with an emphasis on managing data warehouses
  • Strong skills in Python, Git, Docker, SQL, Airflow, ETL pipelines
  • Familiarity with at least one of Hive, Presto, Snowflake, AWS Redshift, BigQuery
  • AWS Cloud experience is a must; Azure or GCP experience is good to have
  • A passion for programming and solving problems with code
  • A bachelor's degree in Computer Science/Software Engineering or equivalent industry experience
  • A love for technology, and an insatiable curiosity for new tools to tackle real problems
  • Sound knowledge of Apache Spark and Python programming
  • Deep experience developing data processing tasks in PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target destinations
  • Experience in deploying and operationalizing code is an added advantage
  • Design and build high-performing, scalable data processing systems to support multiple internal and third-party data pipelines
  • Write Python/Spark jobs for data transformation, aggregation, ETL, and Machine Learning.
  • Tune PySpark jobs and optimize their performance
  • Responsible for Design, Coding, Unit Testing, and other SDLC activities in a big data environment
  • Gather and understand requirements, analyze and convert functional requirements into concrete technical tasks, and provide reasonable effort estimates
  • Work proactively, independently, and with global teams to address project requirements and articulate issues/challenges with enough lead time to address project delivery risks
  • Exposure to Elasticsearch or Solr is a plus
  • Exposure to NoSQL databases such as Cassandra and MongoDB
  • Exposure to serverless computing
  • Must have a minimum of 3 years of hands-on experience in Spark/Python, with overall development experience of 4-8 years in RDBMS systems
  • Experience with integration of data from multiple data sources (RDBMS, API)
  • In-depth knowledge of Python and the Spark component ecosystem is a must
  • Strong knowledge in distributed systems and a solid understanding of Big Data Systems in the Hadoop Ecosystem.
  • Experience in developing and deploying large-scale distributed applications
  • Experience with microservices CI/CD (Jenkins, Nexus, etc.) is preferred

Job Type

  • Duration: Long-term
  • Location: Fully remote
  • Salary: 19 - 36 K/Year USD (annual)

Must-have Skills

  • MuleSoft SDK

Manish N