August 12, 2025

Senior Engineering Analyst, AI Safety, Trust and Safety

Senior • On-site

$147,000 - $216,000/yr

Seattle, WA

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 5 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • 5 years of experience managing projects and defining project scope, goals, and deliverables.

Preferred qualifications:

  • Master’s degree or PhD in a quantitative discipline (e.g., Computer Science, Statistics, Mathematics, or Operations Research).
  • 5 years of experience in a data analysis or data science setting, or 3 years of experience with a PhD.
  • Experience in abuse and fraud disciplines focused on web security, harmful content moderation and threat analysis.
  • Experience with programming languages (e.g., Python, R, Julia), database languages (e.g., SQL), and compiled languages (e.g., C/C++, Java).
  • Experience with prompt engineering and fine-tuning LLMs.
  • Proficiency in applying machine learning techniques to large datasets.

About the job

Fast-paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

The AI Safety Protections team within Trust and Safety develops and implements AI/Large Language Model (LLM) powered solutions to ensure the safety of generative AI across Google's products. This includes safeguards for consumer products, enterprise offerings such as Vertex AI/AI Studio/Flow, on-device applications, as well as foundational models (e.g., Gemini, Juno, Veo) in collaboration with Google DeepMind. We are a team of passionate data scientists and machine learning experts dedicated to identifying and mitigating risks associated with generative AI.

In this role, you will apply the latest advancements in AI/LLMs, work with teams developing AI technologies, and help protect the world from real-world harm. This role works with sensitive content or situations and may involve exposure to graphic, controversial, or upsetting topics or content.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse-fighting and user-trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $147,000-$216,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Develop scalable safety solutions for AI products across Google by leveraging advanced machine learning and AI techniques.
  • Apply statistical and data science methods to thoroughly examine Google's protection measures, uncover potential shortcomings, and develop actionable insights for continuous security enhancement.
  • Partner with various stakeholders to define key objectives, formulate relevant questions, and establish methodologies for extracting critical insights from data to address issues.
  • Drive business outcomes by crafting compelling data stories for a variety of stakeholders, including executive leadership.