
Remote AWS Big Data Engineer

We are seeking a Remote AWS Big Data Engineer to join our team of 250+ professionals across 40 countries. The successful applicant will work with a powerhouse team to develop a complex big data application for the medical industry.

Responsibilities:


  • Advising on the design of system architecture

  • Configuring AWS EMR to ensure optimal performance of jobs written in Java

  • Optimizing Apache Hadoop and Spark for performance

  • Maintaining Hadoop clusters

  • Troubleshooting Apache Spark on multi-node clusters and distributed data processing frameworks

  • Working with highly sensitive and private data



Requirements:


  • 5+ years of professional DevOps experience

  • Significant experience with Apache Spark's streaming and batch frameworks

  • Experience managing large-scale data streaming pipelines with Hadoop

  • Experience in system architecture design

  • Knowledge of service-oriented architecture and data standards (e.g., JSON)

  • Exceptional time management skills

  • Intermediate-level spoken and written English

  • Bachelor's degree or higher; Master's degree preferred



Compensation: Depends on skills and experience. Employees are paid monthly via wire transfer.

This is a full-time, home-based position.


Position

DevOps Engineer


Job Type

Client Payroll


Must-have Skills

  • JSON

    Beginner

  • Hadoop

    Beginner

  • Apache Spark

    Beginner

  • DevOps

    Beginner


Salary

Up to USD 450K/year

Duration

Long-term

Fully Remote




Rita M | United States