
August 5, 2025

Machine Learning Hardware Architect, Accelerator

Senior • On-site

$156,000 - $229,000/yr

Mountain View, CA

Minimum qualifications:

  • Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience.
  • 8 years of experience in silicon core architectural domains, including computer architecture, TPU or parallel processor architecture (VPU/DSP), micro-architecture, and silicon design.

Preferred qualifications:

  • Master's degree or PhD in Electrical Engineering, Computer Engineering, or Computer Science, with an emphasis on computer architecture.
  • Experience in architecting and designing machine learning hardware IP in SoCs for machine learning networks.
  • Experience collaborating cross-functionally with product management, SoC architecture, IP design and verification, ML algorithm and software development teams.
  • Experience in algorithms for machine learning accelerators and compute cores.
  • Experience in micro-architecture, power, and performance optimization.
  • Experience in interconnect/fabric, caching and security architectures.

About the job

Be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's direct-to-consumer products. You'll contribute to the innovation behind products loved by millions worldwide. Your expertise will shape the next generation of hardware experiences, delivering unparalleled performance, efficiency, and integration. Google's mission is to organize the world's information and make it universally accessible and useful. Our team combines the best of Google AI, Software, and Hardware to create radically helpful experiences. We research, design, and develop new technologies and hardware to make computing faster, more seamless, and more powerful. We aim to make people's lives better through technology.

The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Develop TPU (Tensor Processing Unit) architecture for the next-generation tensor SoC to improve performance, power efficiency, and area based on machine learning workload analysis.
  • Define the product roadmap for machine learning accelerator capabilities on Systems on a Chip (SoCs) for various Google devices by collaborating with Google research and silicon product management teams.
  • Drive hardware intellectual property (IP) architecture specifications into design implementation for SoCs by partnering with core IP design teams across global sites.
  • Align with SoC architects and system or experience architects to address dynamic power, performance, and area requirements at the SoC level for multimedia and artificial intelligence (AI) use cases and experiences.
  • Define and deliver hardware IP architecture specifications that meet power, performance, area, and image quality goals, while owning the process through tape-out and product launch.