Sr. Big Data Engineer
- 5+ years of experience in a Data Engineering role, with an emphasis on managing data warehouses
- Strong skills in Python, Git, Docker, SQL, Airflow, ETL pipelines
- Familiarity with at least one of Hive, Presto, Snowflake, AWS Redshift, BigQuery
- AWS cloud experience is a must; Azure or GCP experience is good to have
- A passion for programming and solving problems with code
- A bachelor's degree in Computer Science/Software Engineering or equivalent industry experience
- A love for technology, and an insatiable curiosity for new tools to tackle real problems
- Sound knowledge of Apache Spark and Python programming
- Deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations
- Experience in deploying and operationalizing code is an added advantage
- Design and build high performing and scalable data processing systems to support multiple internal and 3rd party data pipelines
- Write Python/Spark jobs for data transformation, aggregation, ETL, and Machine Learning.
- Tuning PySpark jobs and optimizing their performance
- Responsible for Design, Coding, Unit Testing, and other SDLC activities in a big data environment
- Gather and understand requirements, analyze and convert functional requirements into concrete technical tasks, and provide reasonable effort estimates
- Work proactively, independently, and with global teams to address project requirements and articulate issues/challenges with enough lead time to address project delivery risks
- Exposure to Elasticsearch or Solr is a plus
- Exposure to NoSQL databases such as Cassandra and MongoDB
- Exposure to Serverless computing
- Must have a minimum of 3 years of hands-on experience in Spark/Python, with 4-8 years of overall development experience in RDBMS systems
- Experience with integration of data from multiple data sources (RDBMS, API)
- In-depth knowledge of Python, Spark components, and the Spark ecosystem is a must
- Strong knowledge in distributed systems and a solid understanding of Big Data Systems in the Hadoop Ecosystem.
- Experience in developing and deploying large-scale distributed applications
- Experience in microservices CI/CD (Jenkins, Nexus, etc.) would be preferred
Job Type: Payroll
Salary: 19-36K USD/year
Duration: Long-term
Fully Remote
Manish N