
Data Engineer

Some context on our scale

- We see millions of daily real-time events representing the living, breathing internet

- We track billions of digital assets

- We store petabytes of event-state data

RESPONSIBILITIES




    • Selecting and integrating Big Data tools and frameworks

    • Implementing ETL strategies

    • Monitoring performance and driving the discussion on supporting infrastructure

    • Defining data retention policies





YOU ARE




    • Proficient in distributed computing principles

    • Proficient with Hadoop v2, MapReduce, and HDFS

    • Experienced in building stream-processing systems using solutions such as Storm, Spark Streaming, or Kafka

    • Experienced with Big Data querying tools such as Pig, Hive, and Impala

    • Experienced with NoSQL databases such as HBase, Cassandra, or MongoDB

    • Knowledgeable about various ETL techniques and frameworks, such as Flume

    • Experienced with various messaging systems, such as Kafka or RabbitMQ

    • Experienced with Big Data ML toolkits such as Mahout, SparkML, or H2O

    • Experienced with Lambda Architecture and aware of its advantages and drawbacks





OUR STACK




    • Front-End: Angular

    • Back-End: Ruby / Rails, Node

    • Data Stores: Postgres, Mongo, ElasticSearch, Dynamo, InfluxDB, Redis, Memcache, S3

    • Clouds: AWS, Heroku, DigitalOcean





Position

Data Scientist


Must-Have Skills

  • Hadoop (Beginner)

  • Big Data (Beginner)


Job Type

Client Payroll

Up to USD 450K/year (annual salary)

Fully Remote


Long-term (Duration)


Michael P | United States