October 22, 2025

Senior Software Engineer, TPU Performance

Senior • On-site

$197,000 - $291,000/yr

Mountain View, CA

Minimum qualifications:

  • Bachelor’s degree or equivalent practical experience.
  • 8 years of experience in software development.
  • 5 years of experience with software development in one or more programming languages.
  • 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
  • 3 years of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging).
  • Experience working with Graphics Processing Unit (GPU) or Tensor Processing Unit (TPU).

Preferred qualifications:

  • Master’s degree or PhD in Engineering, Computer Science, or a related technical field.
  • 8 years of experience with data structures/algorithms.
  • Experience in low-level GPU programming (CUDA, OpenCL, etc.) and performance tuning techniques.
  • Experience with machine learning, compiler optimization, code generation, and runtime systems for GPU architectures (OpenXLA, MLIR, Triton, etc.).
  • Understanding of modern GPU architectures (NVIDIA, AMD), memory hierarchies, and performance bottlenecks.
  • Ability to develop and utilize sophisticated performance models and benchmarks to guide optimization efforts and hardware roadmap decisions.

About the job

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic to take on new problems across the full stack as we continue to push technology forward.

In this role, you will develop ML performance analysis and optimization technology to advance the latest TPU platform to market-leading performance. You will work on Gemini, as well as industry-leading open-source models, to understand model architectures and optimize the performance of these ML models on TPU systems for both the JAX and PyTorch platforms. You will improve the performance of ever-evolving ML workloads. These efforts will influence next-generation TPU architectures via strategic partnerships, ensuring top performance for Gemini and open-source ML models.

The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers, and the billions of people who use Google services around the world.

We prioritize security, efficiency, and reliability across everything we do, from developing our latest TPUs to running a global network, while shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.

The US base salary range for this full-time position is $197,000-$291,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Build optimizations that not only improve benchmarks but also power Google's most critical products and services, impacting billions of users worldwide and driving significant Cloud business.
  • Shape the entire TPU software stack through influencing model design, optimizing low-level kernels and compilers (XLA, OpenXLA, JAX, PyTorch), and bridging the gap between model developers and hardware for optimal co-design and performance.
  • Address the most challenging performance bottlenecks and explore groundbreaking optimization techniques through Google’s unparalleled access to the latest generation of TPUs, tooling, and a decade of experience building AI accelerators.
  • Collaborate with some of the most resourceful minds in ML, compiler design, and systems architecture through internal and external partnerships, as well as open-source projects.