June 27, 2025
Senior • Hybrid • On-site • Remote
$148,000 - $235,750/yr
Santa Clara, CA
NVIDIA is seeking outstanding AI Solutions Architects to assist and support customers who are building solutions with our newest AI technology. At NVIDIA, our solutions architects work across different teams and enjoy helping customers with the latest Accelerated Computing and Deep Learning software and hardware platforms. We're looking to grow our company and build our teams with the smartest people in the world. Would you like to join us at the forefront of technological advancement? You will become a trusted technical advisor to our customers and work on exciting projects and proofs of concept focused on inference for Generative AI and Large Language Models (LLMs). You will also collaborate with a diverse set of internal teams on performance analysis and modeling of inference software. You should be comfortable working in a dynamic environment and have experience with Generative AI, LLMs, and GPU technologies. This role is an excellent opportunity to work on an interdisciplinary team at NVIDIA!
What You Will Be Doing:
Partnering with other solution architects and with engineering, product, and business teams to understand their strategies and technical needs and help define high-value solutions
Dynamically engaging with developers, scientific researchers, and data scientists, gaining experience across a range of technical areas
Strategically partnering with lighthouse customers and industry-specific solution partners targeting our computing platform
Working closely with customers to help them adopt and build creative solutions using NVIDIA technology and MLOps solutions
Analyzing performance and power efficiency of AI inference workloads on Kubernetes
Some travel to conferences and customers may be required
What We Need To See:
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, other Engineering or related fields (or equivalent experience)
5+ years of hands-on experience with Deep Learning frameworks such as PyTorch and TensorFlow
Strong fundamentals in programming, optimizations, and software design, especially in Python
Strong problem-solving and debugging skills in GPU orchestration and Multi-Instance GPU (MIG) management within Kubernetes environments
Experience with containerization and orchestration technologies, monitoring, and observability solutions for AI deployments
Strong knowledge of the theory and practice of LLM and DL inference
Excellent presentation, communication, and collaboration skills
Ways To Stand Out From The Crowd:
Prior experience with DL training at scale and with deploying or optimizing DL inference in production
Experience with NVIDIA GPUs and software libraries such as NVIDIA NIM, Dynamo, TensorRT, TensorRT-LLM
Excellent C/C++ programming skills, including debugging, profiling, code optimization, performance analysis, and test design
Familiarity with parallel programming and distributed computing platforms
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.