October 2, 2025
Senior • Hybrid • On-site • Remote
$224,000 - $356,500/yr
Santa Clara, CA, +1
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people.
Today, we’re tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We are in search of a Senior Deep Learning Systems Engineer to propel NVIDIA’s Autonomous Vehicles project forward. In this role, you will build and scale the training libraries and infrastructure that make end-to-end autonomous driving models possible. By enabling training on thousands of GPUs and massive datasets, you will accelerate iteration speed and improve safety, working closely with research and platform teams across NVIDIA.
What you’ll be doing:
Crafting, scaling, and hardening deep learning infrastructure libraries and frameworks for training on multi-thousand GPU clusters.
Improving efficiency throughout the training stack: data loaders, distributed training, scheduling, and performance monitoring.
Building robust training pipelines and libraries to handle massive video datasets and enable rapid experimentation.
Collaborating with researchers, model engineers, and internal platform teams to enhance efficiency, minimize stalls, and improve training availability.
Owning core infrastructure components such as orchestration libraries, distributed training frameworks, and fault-resilient training systems.
Partnering with leadership to ensure infrastructure scales with growing GPU capacity and dataset size while maintaining developer efficiency and stability.
What we need to see:
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, or a related field, or equivalent experience.
12+ years of professional experience building and scaling high-performance distributed systems, ideally in ML, HPC, or large-scale data infrastructure.
Extensive knowledge of deep learning frameworks (PyTorch preferred), large-scale training (DDP/FSDP, NCCL, tensor/pipeline parallelism), and performance profiling.
Strong systems background: datacenter networking (RoCE, IB), parallel filesystems (Lustre), storage systems, schedulers (Slurm, Kubernetes, etc.).
Proficiency in Python and C++, with experience writing production-grade libraries, orchestration layers, and automation tools.
Ability to work closely with cross-functional teams (ML researchers, infrastructure engineers, product leads) and translate requirements into robust systems.
Ways to stand out from the crowd:
Demonstrated experience scaling large GPU training clusters with >1,000 GPUs.
Contributions to open-source ML systems libraries (e.g., PyTorch, NCCL, FSDP, schedulers, storage clients).
Expertise in fault resilience and high availability, including elastic training and large-scale observability.
Proven leadership as a hands-on technical authority, mentoring others and establishing guidelines for ML systems engineering.
Familiarity with reinforcement learning (RL) at scale, particularly in the context of simulation-heavy workloads.
You will also be eligible for equity and benefits.