Scaled Testing Specialist, Responsible AI

Google

On Site
Full Time
$160,000
Washington, DC

Job Overview

Job Title: Scaled Testing Specialist, Responsible AI
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: $160,000
Location: Washington, DC

Job Description

About the Job

The Trust & Safety team at Google is dedicated to identifying and tackling the most significant challenges to the safety and integrity of our products. Members of this team leverage technical expertise, exceptional problem-solving abilities, user insights, and proactive communication to safeguard users and partners from abuse across Google products such as Search, Maps, Gmail, and Google Ads. This role requires a big-picture thinker and a strategic team player with a strong commitment to ethical practices. You will collaborate globally and cross-functionally with Google engineers and product managers to swiftly identify and combat abuse and fraud. Your work will directly contribute to promoting trust in Google and ensuring the highest levels of user safety.

As a Scaled Testing Specialist, Responsible AI, you will become an expert in structured and unstructured safety pre-launch testing for Google's Generative AI (GenAI) models and products, with a specific focus on the under-18 user experience. You will work closely with technical abuse-fighting experts in Trust & Safety to understand launch requirements and develop and implement robust testing protocols. Utilizing comprehensive data analysis, you will provide quantitative and qualitative actionable insights on potential risks, informing mitigation strategies for both Trust & Safety and product teams. Effective relationship building will be crucial as you manage numerous stakeholders, bringing a structured and organized approach. Your analytical thinking, data-driven decision-making, and technical know-how will enable cross-team collaboration and rapid execution, shaping the future of AI development to ensure Google's AI products are safe for all users.

Please be aware that this role may involve exposure to graphic, controversial, and/or upsetting content.

At Google, we work diligently to earn our users’ trust every day. The Trust & Safety team, comprising abuse-fighting and user trust experts, strives to make the internet a safer place. We collaborate with teams across Google to deliver innovative solutions in areas such as malware, spam, and account hijacking. Our team of Analysts, Policy Specialists, Engineers, and Program Managers works to mitigate risk and combat abuse across all Google products, protecting users, advertisers, and publishers globally in over 40 languages.

Responsibilities

  • Own the pre-launch U18 testing lifecycle for prominent GenAI products, from aligning on compliance standards and safety guidelines to final execution, ensuring consistency across all product areas.
  • Define prompt generation and scraping strategies using Large Language Model (LLM) tools and vendor teams to rigorously test model boundaries, compliance, and potential risks.
  • Conduct deep-dive qualitative and quantitative analyses of test results to identify edge cases and provide actionable mitigation strategies that inform critical pre- and post-launch decision-making.
  • Build reusable frameworks, operational norms, and best practices for red teaming and AI safety to scale Responsible AI (RAI) testing efficiency and impact cross-functionally.

Minimum Qualifications

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in data analytics, Trust & Safety, policy, cybersecurity, or related fields.

Preferred Qualifications

  • Master's degree or PhD in a relevant field.
  • Education in, or experience with, machine learning.
  • Experience with SQL, data collection/transformation, building dashboards and visualizations, or with a scripting/programming language (e.g., Python).
  • Strong understanding of AI systems, machine learning, and their potential risks or experience working with Google's products and services, particularly GenAI products.
  • Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

Key Skills/Competency

  • AI Safety
  • Generative AI
  • Data Analytics
  • Trust & Safety
  • Risk Mitigation
  • Testing Protocols
  • Machine Learning
  • SQL
  • Python
  • Stakeholder Management

Tags:

Scaled Testing Specialist
AI safety
Generative AI
Trust & Safety
Data analytics
Risk mitigation
Machine learning
Testing protocols
Python
SQL
Cybersecurity
Policy
Stakeholder management
Data visualization
Red teaming
LLM
User experience
Compliance
Problem-solving
Critical thinking

How to Get Hired at Google

  • Research Google's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Tailor your resume for Scaled Testing Specialist roles: Highlight experience in AI safety, data analytics, and trust & safety.
  • Showcase relevant skills: Emphasize expertise in GenAI, machine learning, SQL, Python, and risk assessment.
  • Prepare for technical and behavioral interviews: Practice problem-solving, analytical thinking, and stakeholder management scenarios.
  • Demonstrate passion for user safety: Articulate your commitment to ethical AI and protecting users from abuse.
