
December 17, 2025

Senior Data Engineer

Senior • Remote

$28,500 - $35,500

Warsaw, Poland

We are a team of experts bringing together top talent in IT and analytics. Our mission is to build high-performing data and technology teams — from the ground up or by scaling existing ones — to help our partners become truly data-driven organizations.


Currently, we are looking for a Senior Data Engineer to join the Global Analytics Unit, a centralized team that drives data-driven decision-making and develops smart data products powering everyday operations across markets.


Innovation is at the heart of the team: we foster a data-first mindset across all business areas, from sales and logistics to marketing and procurement. As the team expands its analytics solutions globally, we’re seeking a hands-on engineer who combines strong software craftsmanship with a passion for building scalable, automated data systems.



Why join us

If you want to:

  • Work on complex, large-scale data systems with global impact

  • Build robust and scalable data pipelines using modern cloud-native architectures

  • Contribute to innovative projects and see your ideas implemented

  • Work in a diverse global team of top-tier engineers and data professionals

  • Have the freedom to shape tools, technologies, and processes

  • Operate in a culture that values autonomy, collaboration, and technical excellence

...then this role is for you.


We also offer:

  • Flexible working hours and remote work options

  • A relaxed, non-corporate environment with no unnecessary bureaucracy

  • Private medical care and Multisport card

  • A modern office in central Warsaw with great transport links

  • Access to global learning resources, certifications, and knowledge exchange



Your role

As a Senior Data Engineer, you will:

  • Design, develop, and maintain end-to-end data pipelines and architectures for large-scale analytics solutions

  • Refactor code and REST services to support dynamic OpCo deployments and multi-tenant scalability

  • Develop zero-touch deployment pipelines to automate infrastructure and environment provisioning

  • Implement data validation and testing frameworks ensuring reliability and accuracy across data flows

  • Integrate new pipelines into a harmonized data execution portal

  • Build and maintain serverless and Databricks-based data processing systems on Azure

  • Design and optimize ETL/ELT workflows in Python (including PySpark); an illustrative sketch follows this list

  • Implement Infrastructure as Code (IaC) for reproducible deployments using tools like Terraform

  • Collaborate closely with Data Scientists, Architects, and Analysts to deliver production-grade data products

  • Troubleshoot, monitor, and improve data pipelines to ensure performance and resilience
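
To give a concrete flavor of this kind of work, here is a minimal sketch of a PySpark ETL step with a simple data-quality gate. It is illustrative only: the paths, column names, and table layout are hypothetical examples, not details of the actual project.

    # Minimal ETL sketch: extract, transform, validate, load.
    # All names (paths, columns) are hypothetical examples.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

    # Extract: read a raw orders table (hypothetical path).
    raw = spark.read.parquet("/mnt/raw/orders")

    # Transform: normalize timestamps and aggregate daily revenue per market.
    daily = (
        raw
        .withColumn("order_date", F.to_date("order_ts"))
        .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
        .groupBy("market", "order_date")
        .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
    )

    # Validate: fail fast instead of silently publishing bad data downstream.
    if daily.count() == 0:
        raise ValueError("Validation failed: transformed batch is empty")

    # Load: write partitioned output for downstream analytics consumers.
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "/mnt/curated/daily_revenue"
    )

In production, a check like this would typically live in a dedicated validation or observability framework (for example dbt tests or Great Expectations) rather than inline in the job, but the principle is the same: fail fast instead of publishing bad data.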



What we expect

  • 5+ years of professional experience as a Data Engineer or Software Engineer in data-intensive environments

  • Strong Python development skills, with solid understanding of OOP, modular design, and testing (unit/integration)

  • Experience with PySpark and distributed data processing frameworks

  • Hands-on experience with the Azure data ecosystem, including Databricks, Data Factory, Synapse, and serverless compute

  • Solid knowledge of SQL and database performance optimization

  • Experience in CI/CD and DevOps practices for data pipelines (GitHub Actions, Azure DevOps, or similar)

  • Proven ability to refactor complex systems and implement scalable, automated solutions

  • Experience in data testing, validation, and observability frameworks

  • Strong communication skills and the ability to work independently in a global, collaborative team environment

  • Fluent English


Nice to have

  • Experience with dbt (data build tool) for transforming and testing data in analytics pipelines

  • Experience with Terraform or other Infrastructure as Code tools

  • Familiarity with containerization and orchestration (Docker, Kubernetes)

  • Understanding of data governance and metadata management principles

  • Experience in multi-tenant or multi-market system design



In short

We’re looking for a software-minded Data Engineer — someone who writes clean, testable Python, designs systems that scale globally, and loves automating everything that can be automated. You’ll have a real impact on the architecture and delivery of global analytics solutions used daily across multiple markets.





Please add to your CV the following clause:

"I hereby agree to the processing of my personal data included in my job offer by hubQuest spółka z ograniczoną odpowiedzialnością located in Warsaw for the purpose of the current recruitment process.”


If you want to be considered in future recruitment processes, please add the following statement:

"I also agree to the processing of my personal data for the purpose of future recruitment processes.”