
Software Engineer, Data Pipeline

Note: We prefer this position to be located in Ann Arbor, Michigan (and will help with relocation!) but are open to hiring remote for the right candidate.

We're looking for a Data Engineer to help grow our data processing pipeline! We perform billions of network handshakes and DNS lookups per hour, and consume external data feeds, to maintain an up-to-date view of all hosts and networks on the Internet. You will help build and maintain the processing pipeline that consumes inbound data feeds to produce a consistent view of Internet hosts. We leverage the Google Cloud Platform (including Google Dataflow, Bigtable, and BigQuery) for processing data, and we also build our own analysis tools. Your responsibilities will include exploring new ways of processing and analyzing incoming network data and building out our data processing pipeline.

The types of things you'll do:

Work with Apache Beam, Airflow, Google Dataflow, Bigtable, and BigQuery to build the next generation of the Our Company data processing pipeline

Design automated solutions for building, testing, monitoring, and deploying ETL pipelines in a continuous integration environment

Work with application engineers to develop internal APIs and data solutions to power Our Company product offerings

Coordinate with the backend engineering team to analyze and improve the quality and consistency of our data

Desired Qualifications:

Bachelor's degree in Computer Science or related field, or equivalent experience

3+ years of full-time, industry experience

Deep understanding of relational and NoSQL data stores (e.g., Snowflake, Redshift, Bigtable, Spark) and approaches

Hands-on experience building data processing pipelines (e.g., in Storm, Beam)

Proficiency with object-oriented and/or functional languages (e.g., Java, Scala, Go)

Strong scripting ability in Python/Ruby/Bash

 




Job Type: Client Payroll

Position: Full-Stack Developer

Must-have Skills

  • NoSQL: Beginner
  • Java (All Versions): Beginner
  • Python: Beginner
  • Redshift: Beginner

Salary: Up to 450K/year USD (annual salary)

Duration: Long-term

Fully Remote


Jose N | United States