September 4, 2025

Senior Analyst, Content Adversarial Red Team

Senior • On-site

$132,000 - $194,000/yr

Austin, TX, +1

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in technology, red teaming, policy, cybersecurity, anti-abuse, Trust and Safety, or related fields.

Preferred qualifications:

  • Experience working with Google's products and services, particularly GenAI products.
  • Experience working with large datasets and data analysis tools.
  • Experience in adversarial testing, red teaming, Generative AI safety, Generative AI ethics/responsibility, or similar.
  • Understanding of AI systems, machine learning, and their potential risks.
  • Ability to think strategically and identify emerging threats and vulnerabilities.
  • Excellent problem-solving skills with a focus on data analysis.

About the job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

As a Senior Analyst on the Content Adversarial Red Team (CART), you will identify and mitigate emerging content safety risks within Google's Generative AI products. You will lead the charge in uncovering unknown generative AI issues: novel threats and vulnerabilities that traditional testing methods do not capture. Your understanding of AI safety and your ability to think strategically will be instrumental in shaping the future of AI development, ensuring that Google's AI products are safe, fair, and unbiased.

At Google we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user-trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $132,000-$194,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Identify emerging threats and design highly targeted testing approaches.
  • Identify key areas for improvement and develop comprehensive program plans.
  • Be exposed to graphic, controversial, and upsetting content.
  • Partner with product managers, engineers, researchers, and other stakeholders to understand product functionality and potential vulnerabilities, and to develop solutions.
  • Collaborate with team members and stakeholders to create and implement best practices for red teaming, threat assessment, and AI safety.