New offer - be the first one to apply!

October 30, 2025

Red Teaming Analyst, Trust and Safety, YouTube

Senior • On-site

$132,000 - $194,000/yr

San Bruno, CA

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in Trust and Safety, product policy, privacy and security, legal, compliance, risk management, intel, content moderation, red teaming, AI testing, adversarial testing, or similar.
  • 1 year of experience in data analytics or research, business process analysis.

Preferred qualifications:

  • Master's degree or PhD in a relevant field.
  • Experience in working with Google's products and services, particularly Generative AI products and AI systems, machine learning, and their potential risks.
  • Experience in SQL, building dashboards, data collection/transformation, visualization/dashboards, or scripting/programming language (e.g., Python).
  • Experience in using data to provide solutions and recommendations and identify emerging threats and vulnerabilities.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
  • Excellent communication and presentation skills (written and verbal) and influence cross-functionally at various levels.

About the job

Fast-paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create, and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

The YouTube Intelligence Desk is a proactive effort within YouTube to understand emerging threats and work across the organization to mitigate them. In this role, you will look across policies and seek to understand bad actors behaviors, motivations, and tactics and identify vulnerabilities across YouTube product surfaces and leverage data to better articulate risks to the YouTube ecosystem.

At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we listen, share, and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun and we do it all together.

The US base salary range for this full-time position is $132,000-$194,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Experiment with and develop techniques to overcome safety features in complex, emergent AI capabilities.
  • Establish standardized, reusable frameworks that can be applied across products.
  • Develop sophisticated prompt sets and jailbreaking strategies to sufficiently test product safety, working with partner teams to leverage and evolve best practices.
  • Expand expertise and serve as a thought partner on novel testing, providing guidance to product launch owners and driving progress and alignment across Trust and Safety teams.
  • Collaborate with stakeholders across Trust and Safety to create and share new insights and approaches for testing, threat assessment, and AI safety.