December 16, 2025

Hardware Architecture Modeling Engineer, PhD, University Graduate

Mid • On-site

$132,000 - $189,000/yr

Sunnyvale, CA

Minimum qualifications:

  • PhD degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience.
  • Experience in any one domain of computer engineering or silicon engineering through internships, academic research, or publications (e.g., co-design, digital design, architecture).
  • Experience programming in C++.

Preferred qualifications:

  • Research or internship experience in AI/ML hardware acceleration.
  • Experience with publications in peer-reviewed journals and conferences.
  • Ability to demonstrate significant understanding of relevant domains such as architecture, digital design, and performance.
  • Excellent problem-solving and communication skills, with the ability to work effectively in a team environment.

About the job

In this role, you’ll help shape the future of AI/ML hardware acceleration, driving the cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You’ll be part of a team that pushes boundaries, developing the custom silicon behind products used by millions worldwide, and you’ll apply your design and verification expertise to complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems.

As a Hardware Architecture Modeling Engineer, you will work with hardware and software architects to model, analyze, and define next-generation Tensor Processing Units (TPUs).

The AI and Infrastructure team is redefining what’s possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and Infrastructure at unparalleled scale, efficiency, reliability and velocity. Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide.

We're the driving force behind Google's groundbreaking innovations, empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.

The US base salary range for this full-time position is $132,000-$189,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Develop architectural and microarchitectural models to enable quantitative analysis.
  • Conduct performance and power analyses and quantitatively evaluate proposals.
  • Contribute to Machine Learning workload characterization, benchmarking, and hardware-software co-design.
  • Collaborate with partners across hardware design, software, compiler, Machine Learning (ML) model, and research teams on hardware/software co-design.
  • Propose capabilities for next-generation TPUs and the chip roadmap, and contribute to TPU chip specifications.