August 1, 2025
Senior • On-site
$184,000 - $287,500/yr
Santa Clara, CA
We are now looking for a Senior Deep Learning Software Engineer, FlashInfer. NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
We're looking for outstanding AI systems engineers to develop groundbreaking technologies in the inference systems software stack! We build innovative AI systems software to accelerate AI inference. As a member of the team, you'll develop libraries, code generators, and GPU kernel technologies for NVIDIA's hardware architecture. This means designing and building things like new abstractions, efficient attention kernel implementations, new LLM inference runtime components, and kernel code generators to accelerate large language models, agents, and other high-impact AI workloads.
What you'll be doing:
Innovating and developing new AI systems technologies for efficient inference
Designing, implementing, and optimizing kernels for high-impact AI workloads
Designing and implementing extensible abstractions for LLM serving engines
Building efficient just-in-time domain-specific compilers and runtimes
Collaborating closely with other engineers at NVIDIA across deep learning frameworks, libraries, kernels, and GPU architecture teams
Contributing to open source communities like FlashInfer, vLLM, and SGLang
What we need to see:
Master's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience); PhD preferred
6+ years of academic or industry experience with ML/DL systems development preferred
Strong experience developing or using deep learning frameworks (e.g. PyTorch, JAX, TensorFlow, ONNX), and ideally inference engines and runtimes such as vLLM, SGLang, and MLC
Strong Python and C/C++ programming skills
Ways to stand out from the crowd:
Background in domain-specific compiler and library solutions for LLM inference and training (e.g. FlashInfer, Flash Attention)
Expertise in inference engines like vLLM and SGLang
Expertise in machine learning compilers (e.g. Apache TVM, MLIR)
Strong experience in GPU kernel development and performance optimizations (especially using CUDA C/C++, cuTile, Triton, or similar)
Open source project ownership or contributions
You will also be eligible for equity and benefits.