Minimum qualifications:
- Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
- 15 years of experience in engineering, including experience with technical innovation.
- Experience with designing and operating large-scale distributed systems.
- Experience in Applied AI/LLMs and Safety.
Preferred qualifications:
- Advanced degree in Computer Science or a related technical field.
- Experience with architecting and delivering cloud-native solutions at scale.
- Experience providing technical guidance and enabling teams to deliver complex, cross-functional engineering projects.
- Experience motivating and influencing engineering teams by creating a shared sense of vision and purpose.
- Experience as an executive-level technical leader.
- An entrepreneurial and strategic mindset, with the ability to thrive at the intersection of technology and business and shape high-level technical roadmaps.
About the job
As a Distinguished Engineer for AI Safety, you will be the most senior technical leader responsible for setting the vision and driving the technical architecture for how Google ensures the safety and reliability of its next-generation AI systems. You will lead the charge to answer critical questions in partnership with Google DeepMind (GDM), Cloud ML, Core Data, Trust and Safety, and other key stakeholders.
In this role, you will be an executive individual contributor, operating at the intersection of deep technical expertise and forward-looking strategy. You will be responsible for architecting and building novel systems from the ground up, including autonomous AI agents for safety evaluations, advanced model introspection capabilities, and robust testing frameworks.
Beyond internal safeguards, you will shape the high-level conversation and technical strategy for sharing AI Safety research and tools externally with the broader community, working with research teams in GDM, Privacy, Safety, and Security, and other organizations. You will be a strategic thinker and hands-on leader who thrives in ambiguity, provides technical guidance across many teams, and is passionate about solving critical AI Safety challenges.
The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users, and they drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers, and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.
The US base salary range for this full-time position is $349,000-$485,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Define the technical vision and lead the architecture for autonomous AI agents designed for proactive risk identification, internal evaluations, and stress-testing our models for potentially harmful emergent behaviors.
- Architect and build foundational systems for model interpretability, enabling us to understand and predict complex AI behavior and decision-making processes without human intervention.
- Lead the design of core introspection capabilities within our AI platforms, enabling them to self-analyze their behavior, decisions, and potential vulnerabilities, and to steer outputs toward safer responses that ensure resilience and trustworthiness.
- Develop and implement countermeasures to protect our proprietary models from misuse, including advanced model distillation prevention and anti-scraping technologies.
- Drive the strategic narrative and roadmap for sharing AI Safety research and tools externally, defining the architecture for AI-powered safety platforms and the relationship with Cloud and the open-source community.