Senior Analyst, Content Adversarial Red Team

Job Description
Benefits at Google
In accordance with Washington state law, Google offers a comprehensive benefits package to all eligible US-based employees. These benefits include:
- Health, dental, vision, life, disability insurance
- Retirement Benefits: 401(k) with company match
- Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
- Sick Time: 40 hours/year (statutory, where applicable); 5 days/event (discretionary)
- Maternity Leave (Short-Term Disability + Baby Bonding): 28-30 weeks
- Baby Bonding Leave: 18 weeks
- Holidays: 13 paid days per year
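Note: Assuming a biweekly pay schedule (26 pay periods per year) and 8-hour workdays, neither of which is stated above, the listed accrual rate works out to 6.15 hours × 26 ≈ 160 hours, or roughly 20 vacation days per year.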
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Washington D.C., DC, USA; Austin, TX, USA; Seattle, WA, USA.
Minimum Qualifications
- Bachelor's degree or equivalent practical experience.
- 10 years of experience in data analytics, trust and safety, policy, cybersecurity, business strategy, or a related field.
- Experience in Artificial Intelligence or Machine Learning.
Preferred Qualifications
- Master's degree or PhD in a relevant field.
- 3 years of experience in red teaming, vulnerability research, or penetration testing.
- Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
- Experience with machine learning.
- Experience in SQL, building dashboards, data collection/transformation, visualization/dashboards, or a scripting/programming language (e.g., Python).
- Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
About The Job
The Trust and Safety team at Google is dedicated to identifying and resolving the most significant challenges impacting the safety and integrity of our products. They leverage technical expertise, strong problem-solving abilities, user insights, and proactive communication to safeguard users and partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. This team seeks strategic, big-picture thinkers passionate about upholding Google's values, working globally and cross-functionally with Google engineers and product managers to combat abuse and fraud with urgency. Their work promotes trust in Google and ensures the highest levels of user safety daily.
The Content Adversarial Red Team (CART) within Trust and Safety focuses on unstructured adversarial testing of Google’s premier generative AI products to uncover emerging content risks that structured evaluations might miss. CART collaborates closely with product, policy, and enforcement teams to build the safest possible experiences for Google users.
In this role as a Senior Analyst, Content Adversarial Red Team, you will be instrumental in developing and driving the team’s strategic plans. You will also serve as a key advisor to executive leadership, using your cross-functional influence to advance critical safety initiatives. As an integral team member, you will mentor other analysts, fostering a continuous learning environment by sharing your deep expertise in adversarial techniques. Additionally, you will represent Google’s AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying Google's position as a thought leader in the field.
At Google, earning user trust is an everyday commitment. The Trust and Safety team comprises abuse-fighting and user trust experts who work tirelessly to enhance internet safety. They partner across Google to deliver innovative solutions in areas such as malware, spam, and account hijacking, protecting users, advertisers, and publishers globally in over 40 languages. The team includes Analysts, Policy Specialists, Engineers, and Program Managers, all focused on reducing risk and combating abuse across Google’s extensive product portfolio.
Compensation
The US base salary range for this full-time position is $160,000-$237,000, plus bonus, equity, and benefits. Salary ranges are determined by role, level, and location. Individual pay within this range is influenced by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can provide specific salary range details for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits.
Responsibilities
- Lead and guide team efforts in identifying and analyzing complex content risks, with a special focus on user safety for those under 18.
- Influence cross-functional teams, including Product, Engineering, Research, and Policy, to implement safety initiatives effectively.
- Develop and deploy tailored red teaming exercises to identify emerging, unanticipated, or unknown threats.
- Drive the creation and refinement of new red teaming methodologies and strategies to build the U18 red teaming program and ensure consistency across testing.
- Design, develop, and oversee innovative red teaming strategies to uncover content abuse risks.
- Act as a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations.
- This role involves exposure to graphic, controversial, or upsetting content.
Key Skills/Competencies
- Generative AI
- Adversarial Testing
- Red Teaming
- Content Risk
- Machine Learning
- Cybersecurity
- Trust & Safety
- Data Analytics
- Policy Development
- Executive Advisory
How to Get Hired at Google
- Research Google's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
- Tailor your resume: Highlight your experience in AI/ML, red teaming, and content risk analysis relevant to this role.
- Showcase impact: Quantify your achievements in improving user safety or mitigating adversarial abuse.
- Prepare for technical questions: Expect deep dives into generative AI, ML, cybersecurity, and data analysis.
- Demonstrate leadership skills: Be ready to discuss influencing cross-functional teams and advising executive leadership.