October 15, 2025
Senior • On-site
$284,000 - $425,500/yr
Santa Clara, CA
Join the team building the backbone of the world’s most sophisticated AI cloud
NVIDIA DGX Cloud delivers multi-exascale, GPU-accelerated computing on demand. We are looking for a senior engineering leader to own capacity strategy, fleet reliability, and operational excellence as DGX Cloud scales globally. If you thrive on large-scale infrastructure challenges and want to invent the future of AI computing, we'd love to hear from you!
What you’ll be doing:
Lead end-to-end capacity strategy and forecasting for DGX Cloud across regions and cloud partners (Azure, OCI, GCP, etc.).
Define and implement golden-image standards for DGX nodes: firmware, CUDA/NVIDIA drivers, NCCL/InfiniBand, NVLink/NVSwitch fabrics.
Design and operate automated maintenance and upgrade frameworks with near-zero customer impact, including guardrails, rollback plans, and buffer management.
Own service-level objectives (SLOs) for GPU availability, efficiency, and training/inference reliability; drive continuous improvement and root-cause analysis.
Guide development of orchestration tools and APIs that integrate with NVIDIA software and DGX Cloud provisioning systems.
Partner with DGX Cloud software, data-center engineering, supply chain, and finance to align capacity, cost, and rollout priorities.
Recruit, mentor, and lead an elite team of capacity engineers, SREs, and tooling developers.
What we need to see:
12+ years overall in large-scale infrastructure or site-reliability engineering, with 5+ years in senior leadership.
Bachelor's or Master's degree in an engineering field, or equivalent experience.
Deep understanding of GPU-accelerated compute, including DGX systems, NVLink/NVSwitch fabrics, InfiniBand/Ethernet networking, and high-performance storage.
Demonstrated success in capacity planning and fleet consistency across multi-region or multi-cloud environments.
Expertise in driver/firmware management (CUDA stack, NCCL, OS/kernel dependencies) and distributed training workloads.
Proven track record of delivering against strict availability and performance SLOs at hyperscale.
Ways to stand out from the crowd:
Experience with hybrid cloud deployments and hyperscale partnerships.
Familiarity with Kubernetes GPU scheduling and AI/ML workload patterns.
Track record of influencing hardware/system roadmaps (DGX, Grace Hopper, next-gen GPUs) based on capacity insights.
Strong interpersonal skills to align executives, engineers, and partners around ambitious capacity targets.
NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables outstanding creativity and discovery, and powers what were once science-fiction inventions, from artificial intelligence to autonomous cars. NVIDIA is looking for extraordinary people like you to help us accelerate the next wave of artificial intelligence. NVIDIA is widely considered to be one of the technology world's most desirable employers, with some of the most forward-thinking and hardworking people in the world working for us. If you're creative and passionate about developing cloud services, we want to hear from you!
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 284,000 USD - 425,500 USD. You will also be eligible for equity and benefits.