11 days ago

Senior Analyst, Content Adversarial Red Team

Google

On Site
Full Time
$210,000
Seattle, WA

Job Overview

Job Title: Senior Analyst, Content Adversarial Red Team
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: $210,000
Location: Seattle, WA

Job Description

Senior Analyst, Content Adversarial Red Team at Google

Trust and Safety team members at Google are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. This role requires a big-picture thinker and strategic team player with a passion for doing what's right, working globally and cross-functionally with engineers and product managers to fight abuse and fraud with urgency. Every day, you will promote trust in Google and ensure the highest levels of user safety.

About the Role

The Content Adversarial Red Team (CART) within Trust and Safety conducts unstructured adversarial testing of Google's premier generative AI products to uncover emerging content risks not identified in structured evaluations. CART works alongside product, policy, and enforcement teams to build the safest possible experiences for Google users.

In this role, you will develop and drive the team’s strategic plans while acting as a key advisor to executive leadership, leveraging cross-functional influence to advance safety initiatives. As a member of the team, you will mentor analysts and foster a culture of continuous learning by sharing your deep expertise in adversarial techniques. Additionally, you will represent Google’s AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

Key Responsibilities

  • Lead and guide the team's efforts in identifying and analyzing high-complexity content risks, with a special focus on the safety of users under 18.
  • Influence cross-functional teams, including Product, Engineering, Research, and Policy, to drive the implementation of safety initiatives.
  • Develop and deploy tailored red teaming exercises that identify emerging, unanticipated, or unknown threats.
  • Drive the creation and refinement of new red teaming methodologies, strategies, and tactics to help build the U18 red teaming program and ensure coherence and consistency across all testing modalities.
  • Design, develop, and oversee the execution of innovative red teaming strategies to uncover content abuse risks.
  • Act as a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations.

Please note: This role will be exposed to graphic, controversial, or upsetting content.

Minimum Qualifications

  • Bachelor's degree or equivalent practical experience.
  • 10 years of experience in data analytics, trust and safety, policy, cybersecurity, business strategy, or a related field.
  • Experience in Artificial Intelligence or Machine Learning.

Preferred Qualifications

  • Master's degree or PhD in a relevant field.
  • 3 years of experience in red teaming, vulnerability research, or penetration testing.
  • Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
  • Experience with machine learning.
  • Experience with SQL, data collection/transformation, and visualization/dashboards, or experience in a scripting/programming language (e.g., Python).
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

Benefits at Google

In accordance with Washington state law, Google highlights its comprehensive benefits package, available to all eligible US-based employees. Benefits for this role include health, dental, vision, and disability insurance, life insurance, retirement benefits (401k with company match), paid time off (20 vacation days/year), sick time, maternity leave, baby bonding leave, and 13 paid holidays per year.

Compensation

The US base salary range for this full-time position is $160,000-$237,000, plus bonus, equity, and benefits. Salary ranges are determined by role, level, and location. Individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process. Compensation details listed reflect base salary only, excluding bonus, equity, or benefits.

Key skills/competency

  • Content Adversarial Red Team
  • Generative AI Safety
  • Adversarial Testing
  • Trust and Safety
  • Machine Learning
  • Data Analytics
  • Policy Development
  • Cybersecurity
  • Vulnerability Research
  • Strategic Planning

Tags:

Senior Analyst, Trust and Safety
Adversarial Testing
Content Moderation
AI Safety
Machine Learning
Data Analytics
Policy Development
Cybersecurity
Risk Assessment
Strategic Leadership
User Protection
Python
SQL
Generative AI
Data Visualization
Cloud Platforms
Data Transformation
Statistical Analysis
Red Teaming Tools
Automation

How to Get Hired at Google

  • Research Google's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Tailor your resume: Highlight adversarial testing, AI/ML, Trust & Safety, and leadership experience relevant to the Senior Analyst, Content Adversarial Red Team role.
  • Master technical skills: Demonstrate proficiency in SQL, Python, data visualization, machine learning, and red teaming methodologies specific to AI safety.
  • Prepare for behavioral questions: Emphasize problem-solving, cross-functional collaboration, influencing stakeholders, and commitment to user safety.
  • Network strategically: Connect with current Google employees, particularly within Trust & Safety or AI product teams, for insights and referrals.
