
Senior Software Engineer, Data

You will:


  • Take end-to-end responsibility for designing, building, and maintaining batch and real-time data pipelines and data solutions.

  • Choose and manage tools and technologies to build and support a robust data infrastructure.

  • Be responsible for the data modelling and schema design of our data warehouse.

  • Identify bottlenecks and improve the performance of all our data pipelines.

  • Ensure all necessary monitoring and backup solutions are in place.

  • Be one of the primary points of contact within the organization for data pipelines, ETL processes, and complex queries required by the product or for business intelligence purposes.



Requirements


  • Strong experience using distributed streaming platforms such as Apache Kafka or Amazon Kinesis.

  • Strong experience developing ETLs and data pipelines using Apache Spark 2.0.0 or higher.

  • Strong experience with data modelling for data warehousing use-cases, design patterns, and building highly scalable and secure solutions.

  • Experience with Python, PostgreSQL, and Amazon Redshift.

  • Experience with Hadoop and distributions such as EMR, Cloudera, or Hortonworks.

  • Experience with workflow management tools such as Airflow, Luigi or similar.

  • Experience with end-to-end testing of data pipelines.

  • A desire to work in a respectful and transparent work environment, following our company values, culture, and ways of working.

  • Willingness to submit to a background check, confidentially processed by our third-party partner.



Plus points:


  • Knowledge of data visualization and reporting tools such as Tableau

  • Experience with Elasticsearch and Redis

  • Experience with Segment (CDI)

  • Experience with distributed architectures and microservices.

  • Experience with cloud services (e.g. AWS, GCP, Azure), containerization, configuration management, and infrastructure automation.



Position

Full-Stack Developer


Must have Skills

  • Hadoop

    Beginner

  • Redshift

    Beginner

  • PostgreSQL

    Beginner

  • Python

    Beginner

  • Apache-Kafka

    Beginner

Client Payroll

Up to 450 K/Year USD (Annual salary)

Fully Remote

Longterm (Duration)