June 18, 2025
Senior • Hybrid • On-site • Remote
$184,000 - $287,500/yr
Are you an experienced systems architect with an interest in advancing artificial intelligence (AI) and high-performance computing (HPC) in academic and research environments? We are looking for a Solutions Architect to join the higher education and research team! In this role you will work with universities and research institutions to optimize the design and deployment of AI infrastructure. Our team applies expertise in accelerated software and hardware systems to enable groundbreaking advances in AI, deep learning, and scientific research. This role requires a strong background in building research computing clusters, deploying AI and HPC workloads, and optimizing system performance at scale.
What you’ll be doing:
Serve as a key technical advisor on the design, build-out, and optimization of university research computing infrastructure powering GPU-accelerated scientific workflows
Work with university research computing teams to optimize hardware utilization with software orchestration tools such as NVIDIA Base Command, Kubernetes, Slurm, and Jupyter notebook environments
Implement systems monitoring and management tools to help optimize resource utilization and gain insight into the most demanding application workloads at research computing centers
Document what you learn. This can include building targeted training, writing whitepapers, blogs, and wiki articles, and working through hard problems with a researcher on a whiteboard
Collaborate with researchers to gather feature requests and product feedback for product and engineering teams
What we need to see:
BS/MS/PhD in Engineering, Mathematics, Physical Sciences, or Computer Science (or equivalent experience).
8+ years of relevant work experience
Strong experience in designing and deploying GPU-accelerated computing infrastructure
In-depth knowledge of cluster orchestration and job scheduling technologies, e.g., Slurm, Kubernetes, Ansible, and/or Open OnDemand
Experience with container tools (Docker, Singularity) including at-scale deployment of containerized environments
Expertise in systems monitoring, telemetry, and performance optimization of research computing environments; familiarity with tools such as Prometheus, Grafana, and NVIDIA DCGM
Understanding of datacenter networking technologies (InfiniBand, Ethernet, OFED) and experience with network configuration
Familiarity with power and cooling systems architecture for data center infrastructure
Ways to stand out from the crowd:
Background in deploying LLM training and inference workflows in a research computing environment
Experience evaluating cluster performance using benchmarks such as MLPerf and/or HPL
Experience in delivering technical training, workshops, or mentoring researchers on using HPC/AI systems.
Application- and systems-level knowledge of OpenMPI and NCCL
Experience with debugging and profiling tools, e.g., Nsight Systems, Nsight Compute, Compute Sanitizer, GDB, or Valgrind
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.