
Cloud Data Engineer (m/f/d)

Responsibilities:


  • Store, process, analyze, and explore/visualize data on the Google Cloud Platform.

  • Work on data migration and transformation projects, and with customers to design large-scale data processing systems.

  • Develop data pipelines optimized for scaling, and troubleshoot potential platform challenges.

  • Design, write, and support extract, transform, load (ETL) processes to automate data collection, and manage ML and reporting pipelines, including data quality and monitoring.

  • Combine existing data sources (e.g., BigQuery) with Machine Learning APIs.

  • Collaborate with Data Scientists, Web Analysts, and business stakeholders to ensure our data infrastructure meets constantly evolving requirements.



Requirements:


  • Bachelor's degree in Computer Science or Mathematics, or equivalent practical experience.

  • Experience in one or more programming languages such as Python, Java, or C++, and in working with datasets using SQL.

  • Experience designing databases and defining and implementing system requirements for data collection.

  • Experience writing, maintaining, and monitoring both streaming and batch ETL pipelines operating on structured and unstructured sources.

  • Experience with data processing software (e.g., Hadoop, Spark, Pig, Hive) and data processing algorithms.

  • Familiarity with Machine Learning libraries (such as TensorFlow or Keras), or with statistical analysis using Python or R.

  • Fluent English; German is an advantage.



Position

Backend Developer


Must-have Skills

  • Hadoop: Beginner

  • SQL: Beginner

  • C++: Beginner

  • Java (All Versions): Beginner

  • Python: Beginner

Job Type

Client Payroll

Up to 450K USD/year (annual salary)

Fully Remote



Long-term (Duration)


Evgeniya T | Austria