
Data Engineer, Remote (Boston, Los Angeles, or Bay Area preferred)

Come craft the next generation of our patented machine learning deployment platform. As a Data Engineer, you will work on a cutting-edge enterprise platform for deploying machine learning that provides unprecedented insights into images, video, and documents across multiple industry verticals.

We are a small, remote-first, highly collaborative team spread across North America. We believe that remote workers can concentrate on hard problems, and we solve hard problems. We get together in person twice a year, and weekly via video, to celebrate our successes. We value diversity and seek those who are selfless and scrappy.

We need a savvy Data Engineer to expand and optimize our data and data pipeline architecture, as well as to optimize data flow and collection for our cross-functional teams. If you are an experienced data pipeline builder and data wrangler who enjoys optimizing systems and building them from the ground up, then this job is for you. The right candidate will be excited by the prospect of optimizing or even redesigning our company’s data architecture, working with incredibly smart, motivated people all over the world, and helping grow a Google-backed startup.

Responsibilities

You will support our software developers, database architects, and data scientists, and will ensure that an optimal data delivery architecture is consistent across all ongoing projects. You must be self-directed and comfortable supporting the needs of multiple teams, systems, and products. Specific responsibilities include:


  • Work with the chief architect to develop and maintain an optimal data pipeline architecture.

  • Develop public-facing APIs with Kotlin and Python.

  • Utilize machine learning libraries to analyze visual data.

  • Collaborate with UX and Product on data visualizations and search.

  • Build Elasticsearch plugins to handle ML-extracted data types.

  • Work directly with customers to solve difficult data-driven problems.

  • Clearly document code and APIs so the team can build on your knowledge.



Requirements


  • 3-5 years of experience with Elasticsearch

  • Experience with Java/Kotlin, Python, and Spring Boot

  • Experience with at least one major cloud provider: Google Cloud Platform, Amazon Web Services, or Azure

  • Experience with Docker and Kubernetes

  • Experience with Jupyter Notebook



Position

Data Scientist


Job Type

Client Payroll


Must-have Skills

  • Kubernetes (Beginner)

  • Docker (Beginner)

  • AWS (Beginner)

  • Spring Boot (Beginner)

  • Python (Beginner)

  • Java, all versions (Beginner)

  • Elasticsearch (Beginner)


Compensation

Up to $450K/year USD (annual salary)


Duration

Long-term


Fully Remote
