
Software Engineer - Data Platforms

Some of the projects that Data Engineers work on:


  • Using Spark to process terabytes of event and entity data in order to surface usage analytics to our customers

  • Restructuring our data warehouse and data lake to optimize query performance and cost

  • Tracking and using dimension history in our analytics pipelines

  • Improving the security and privacy of user data



About you:


  • Others enjoy working with you because of your positive attitude and technical skills

  • You've been involved in the design of a data pipeline and would be able to design a pipeline from scratch

  • You have expertise in one or more workflow systems, such as Airflow, Luigi, or DBT

  • You are proficient with Apache Spark

  • You have experience with streaming technologies such as Kafka or Kinesis

  • You have strong knowledge of SQL, and are familiar with data warehousing technologies and practices (Amazon Redshift experience is a plus)

  • You have significant experience with two or more programming languages, including Python

  • You collaborate successfully with data scientists, full-stack engineers, and cross-functional stakeholders



What you'll do:


  • Ensure our data pipeline serves the needs of customers in a robust, flexible, and performant way, while protecting the privacy of our users

  • Lead the design of our next-generation data infrastructure

  • Improve the testability of our nightly batch pipeline

  • Help define and prioritize data engineering projects

  • Write design docs and project plans

  • Mentor other engineers



 


Position

Software Architect


Must have Skills

  • Python (Beginner)

  • SQL (Beginner)

  • Scratch (Beginner)

Job Type

Client Payroll

Up to 450K/year USD (annual salary)

Fully Remote




Longterm (Duration)


Morgan E | United States