
December 17, 2025

Data Engineer

Senior β€’ Remote

$45 - $55/hr

New York, NY

πŸ” Key Info

πŸ“ Work mode: 100% Remote (with minimum 6h overlap with NYC hours: 12:00–20:00 CEST; ideally 14:00–22:00 CEST)

πŸ•’ Contract type: B2B

πŸ’Ό Experience level: Senior

πŸ’° Rate: 45-55 $ net + VAT / hour

⏳ Start date: ASAP

πŸ“† Project length: Long-term cooperation



🏒 About the Company

We collaborate exclusively with a stable, US-based client: a global leader in electronic trading platforms that has operated for over 25 years. The company serves the world’s leading asset managers, central banks, hedge funds, and other institutional investors, facilitating around USD 30 trillion in trades every month across its electronic marketplaces.


πŸ“Œ About the Role

We’re looking for a Senior Data Engineer!

You will build a cutting-edge data platform that ingests, manages, and processes data from all of the client’s businesses. The platform must accommodate a wide range of use cases, from simple customer-facing data APIs to large-scale machine learning models.



πŸ’Ό Your Responsibilities

• Build and run the data platform using technologies such as public cloud infrastructure (AWS and GCP), Kafka, Spark, databases, and containers

• Develop the data platform based on open-source software and cloud services

• Build and run ETL pipelines to onboard data into the platform, define schemas, build DAG processing pipelines, and monitor data quality

• Help develop the machine learning development framework and pipelines

• Manage and run mission-critical production services


βœ… Key Competencies

β€’ Strong eye for detail, data precision, and data quality.

β€’ Strong experience maintaining system stability and responsibly managing releases.

β€’ Considerable production operations and support experience.

β€’ Clear and effective communicator who is able to liaise with team members and end-users on requirements and issues.

β€’ Agile, self-starter who is able to responsibly see things through to completion with minimal assistance and oversight.

β€’ Expert level grasp of SQL and databases/persistence technologies such as MySQL, PostgreSQL, SQL Server, Snowflake, Redis, Presto, etc

β€’ Strong grasp of Python and related ecosystems such as conda or pip.

β€’ Experience building ETL and stream processing pipelines using Kafka, Spark, Flink, Airflow/Prefect, etc

β€’ Experience with using AWS/GCP (S3/GCS, EC2/GCE, IAM, etc), Kubernetes and Linux in production.

β€’ Experience with parallel and distributed computing

β€’ Strong proclivity for automation and DevOps practices and tools such as Gitlab, Terraform, Prometheus.

β€’ Experience with managing increasing data volume, velocity and variety.

β€’ Ability to deal with ambiguity in a changing environment.

β€’ English: B2+/C1 level


βž• Nice to Have

β€’ Familiarity with data science stack: e.g. Jupyter, Pandas, Scikit-learn, Pytorch, MLFlow, Kubeflow etc

β€’ Development skills in Java, Go, or Javascript

β€’ Software builds and packaging on MS Windows

β€’ Experience managing time series data

β€’ Familiarity with working with open source communities

β€’ Financial Services experience


🎁 What We Offer

β€’ Competitive daily rate (B2B)

β€’ Hardware & setup budget (e.g., standing desk, laptop, monitors, coworking space)

β€’ Optional integration trips (New York / London / Warsaw) β€” 3–4 days, covered by the company


🎯 Recruitment Process

β€’ 2–3 stages (fully remote)

β€’ Technical interviews (including live coding or code discussion)

β€’ HR interview

β€’ Fast decision-making process