April 8, 2026

Senior Data Engineer (Python & Databricks)

Senior • Remote

140 - 160 PLN/hr

Warsaw, Poland

At Cyclad, we work with top international IT companies to boost their potential to deliver outstanding, cutting-edge technologies that shape the world of the future. We are seeking an experienced Senior Data Engineer with strong Python and Databricks skills.

This role supports a large-scale transformation from SQL Server–based systems to a Databricks / Delta Lake platform. The focus is on enterprise-grade data engineering and software development, not analytics or reporting. The project, SQL2Databricks, is a migration involving 3,500-4,000 SQL Server databases (2 TB), replicating data in different shapes and schemas to Databricks.

Project information:

  • Type of project: IT Services

  • Office location: Poland

  • Work model: Remote from Poland

  • Budget: 140 - 160 PLN net/hour (B2B)

  • Project length: until the end of 2026, with possible extension

  • Only candidates with citizenship in the European Union and residence in Poland

  • Start date: ASAP

Project scope:

  • Support a large-scale transformation from SQL Server–based systems to a Databricks / Delta Lake platform

  • Transform complex, business-critical SQL logic (stored procedures) into clean, maintainable, and scalable Python / PySpark code

  • Redesign and implement this logic in Python / PySpark within Databricks

  • Contribute to a large, long-running data engineering codebase used by multiple teams

  • Develop production-grade transformation code (packages, modules, reusable components)

  • Design and evolve data models within a Medallion Architecture (Bronze / Silver / Gold) across multiple data layers

  • Ensure software engineering quality, reusability, and long-term maintainability

  • Apply software engineering best practices (clean code, OOP, modularization, refactoring)

  • Work with very large data volumes and highly parallel, event-driven transformations

  • Actively participate in code reviews and technical design discussions

  • Support orchestration workflows (e.g., Azure Data Factory)
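The core of the scope above is porting stored-procedure-style SQL logic into clean, testable Python. A minimal sketch of that idea, using the stdlib sqlite3 module as a stand-in for SQL Server and plain Python in place of PySpark (the orders table and column names are invented for illustration):

```python
import sqlite3

# Invented example schema: an "orders" table that legacy SQL logic aggregates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("acme", 100.0, "paid"), ("acme", 50.0, "void"), ("globex", 75.0, "paid")],
)

# Legacy stored-procedure-style SQL: total paid amount per customer.
sql_result = dict(
    conn.execute(
        "SELECT customer, SUM(amount) FROM orders "
        "WHERE status = 'paid' GROUP BY customer"
    ).fetchall()
)

# The same logic redesigned as a reusable, testable Python function --
# in the real project this would be a PySpark DataFrame transformation.
def total_paid_per_customer(rows):
    totals = {}
    for customer, amount, status in rows:
        if status == "paid":
            totals[customer] = totals.get(customer, 0.0) + amount
    return totals

rows = conn.execute("SELECT customer, amount, status FROM orders").fetchall()
py_result = total_paid_per_customer(rows)

# Both implementations must agree -- the key correctness check in a migration.
assert py_result == sql_result
print(py_result)  # {'acme': 100.0, 'globex': 75.0}
```

Keeping the old SQL result alongside the new Python result, as above, is what makes the rewrite verifiable during the migration.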

Competence demands:

  • Very strong Python and PySpark skills; proven experience with Databricks and Delta Lake

  • Experience working in large, shared codebases (beyond notebooks)

  • Strong SQL skills, especially reading and understanding complex logic

  • Solid object-oriented programming experience, clean code principles

  • Strong data modelling background (transactional and analytical)

  • Experience in redesigning models during platform migrations

  • Familiarity with layered data architectures (Bronze / Silver / Gold)

  • Very good English skills
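The Bronze / Silver / Gold layering expected here can be illustrated with a minimal plain-Python sketch; in Databricks each layer would be a Delta Lake table and each step a PySpark job, and the record fields below are invented for illustration:

```python
# Bronze: raw ingested records, kept as-is (strings, duplicates, bad rows included).
bronze = [
    {"id": "1", "amount": "100.0", "country": "PL"},
    {"id": "1", "amount": "100.0", "country": "PL"},   # duplicate
    {"id": "2", "amount": "oops", "country": "PL"},    # unparseable amount
    {"id": "3", "amount": "50.5", "country": "DE"},
]

def to_silver(records):
    """Silver: cleaned and typed -- deduplicate on id, cast amount, drop bad rows."""
    silver, seen = [], set()
    for r in records:
        if r["id"] in seen:
            continue
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # drop rows that fail the cast
        seen.add(r["id"])
        silver.append({"id": r["id"], "amount": amount, "country": r["country"]})
    return silver

def to_gold(records):
    """Gold: business-level aggregate -- revenue per country."""
    gold = {}
    for r in records:
        gold[r["country"]] = gold.get(r["country"], 0.0) + r["amount"]
    return gold

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'PL': 100.0, 'DE': 50.5}
```

Each layer only reads from the one below it, which is what keeps the transformations reusable and independently testable across multiple data layers.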

Nice to have:

  • Azure Data Factory (orchestration)

  • Azure DevOps, Git, CI/CD pipelines

  • Power BI or analytics tooling

  • Infrastructure / DevOps knowledge (not mandatory)

We offer:

  • Remote working model

  • Dynamic and innovation-driven engineering environment

  • Full-time engagement based on a B2B contract

  • Private medical care with dental care (covering 70% of costs)

  • Multisport card (also for an accompanying person)

  • Life insurance