Security Analyst, Cloud AI Abuse

Google

On Site
Full Time
$160,000
Seattle, WA

Job Overview

Job Title: Security Analyst, Cloud AI Abuse
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: $160,000
Location: Seattle, WA

Job Description

About the Job

Trust & Safety team members at Google identify and tackle critical problems challenging product safety and integrity. They leverage technical expertise, strong problem-solving skills, user insights, and proactive communication to shield users and partners from abuse across Google products such as Search, Maps, Gmail, and Google Ads. This role demands a big-picture thinker and strategic team player passionate about doing what’s right, working globally and cross-functionally with Google engineers and product managers to combat abuse and fraud with urgency. Your daily work will promote trust in Google and ensure the highest levels of user safety.

The Trust and Safety Cloud AI team leads Cloud AI security and safety. Our mission is to safeguard users, protect platform integrity, and ensure responsible deployment of AI products at a global scale. We operate at the intersection of AI research and real-world security, building foundational defenses to prevent the misuse of generative models and agents. By pioneering proactive threat detection and robust mitigation strategies, we enable Google Cloud to advance AI innovation safely, ethically, and securely.

At Google, we earn user trust daily. Trust & Safety is Google’s team of abuse-fighting and user trust experts dedicated to making the internet safer. We partner across Google to deliver bold solutions against malware, spam, and account hijacking. Our team of Analysts, Policy Specialists, Engineers, and Program Managers reduces risk and combats abuse across all Google products, protecting users, advertisers, and publishers globally in over 40 languages.

Minimum Qualifications

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • Experience with SQL (a query sketch follows this list).
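
To ground the SQL and data-analysis requirements, here is a minimal, runnable sketch using Python's built-in sqlite3 module. The request_log table, its columns, and every value in it are hypothetical, chosen only to illustrate the kind of trend summary this role calls for.

```python
# Minimal sketch: summarizing daily abuse-flag rates with SQL.
# The request_log table and all values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE request_log (day TEXT, caller_id TEXT, flagged INTEGER)"
)
conn.executemany(
    "INSERT INTO request_log VALUES (?, ?, ?)",
    [
        ("2024-05-01", "proj-a", 0),
        ("2024-05-01", "proj-b", 1),
        ("2024-05-02", "proj-a", 0),
        ("2024-05-02", "proj-b", 1),
        ("2024-05-02", "proj-b", 1),
    ],
)

# Per-day request volume and flag rate -- the kind of summary
# statistic used to spot an emerging abuse trend.
query = """
    SELECT day,
           COUNT(*)     AS requests,
           AVG(flagged) AS flag_rate
    FROM request_log
    GROUP BY day
    ORDER BY day
"""
for day, requests, flag_rate in conn.execute(query):
    print(f"{day}: {requests} requests, flag rate {flag_rate:.2f}")
```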

Preferred Qualifications

  • Master's degree in a technical discipline (e.g., Computer Science, Statistics, Mathematics, Operations Research, etc.).
  • 5 years of relevant work experience in data analysis.
  • Experience in security threats or abuse detection.
  • Knowledge of or experience in one or more of the following domains: anomaly detection, security threat analysis and investigation, time-series analysis, Cloud APIs, or metrics and reporting (a small anomaly-detection sketch follows this list).
  • Understanding of generative AI technologies, Large Language Models (LLMs), and AI agents.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
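
As a concrete reading of the anomaly-detection and time-series items above, the sketch below flags outliers with a rolling z-score, using only the Python standard library. The series, window, and threshold are hypothetical starting points, not a production detector.

```python
# Rolling z-score anomaly detection over a daily request count.
# All numbers are hypothetical; 3.0 is a common starting threshold.
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=7, threshold=3.0):
    """Flag points more than `threshold` standard deviations
    from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history, z-score undefined
        z = (series[i] - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((i, series[i], round(z, 2)))
    return anomalies

daily_requests = [100, 104, 98, 101, 99, 103, 102, 97, 450, 101]
print(rolling_zscore_anomalies(daily_requests))  # flags the spike at index 8
```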

Responsibilities

  • Monitor Google Cloud AI products for signs of abuse, including prompt injection, jailbreaking, data poisoning, distillation, and generation of policy-violating content.
  • Perform in-depth analysis of risks associated with both generative and agentic AI. Measure these risks using benchmarking, evaluations, red teaming, and scaled usage monitoring.
  • Develop, tune, and deploy rules, heuristics, and rate limits to proactively block abusive actors and mitigate automated attacks (a rate-limiter sketch follows this list).
  • Collaborate effectively with engineering, product, and legal teams to ensure that the risks of AI are understood and robust solutions are adopted.
  • Educate cross-functional teams about Gen AI safety risks and advocate for secure design principles. Promote a culture of safety and user trust throughout the product development process.
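
One minimal interpretation of the rules-and-rate-limits responsibility is a per-caller token bucket that throttles automated bursts. The sketch below is a single-process illustration under assumed parameters (capacity, refill rate); a real deployment would be distributed and tuned against observed traffic.

```python
# Minimal per-caller token-bucket rate limiter.
# capacity and refill_rate are hypothetical tuning knobs.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, capacity=10, refill_rate=1.0):
        self.capacity = capacity        # max burst size
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: block or queue the request

buckets = defaultdict(TokenBucket)

def handle_request(caller_id):
    if not buckets[caller_id].allow():
        return "429: rate limited"
    return "200: ok"

# A rapid burst of 12 requests from one caller: the last two are throttled.
print([handle_request("proj-x") for _ in range(12)])
```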

Key Skills/Competencies

  • Data analysis
  • SQL
  • Security threats
  • Abuse detection
  • Generative AI
  • Large Language Models
  • Anomaly detection
  • Risk mitigation
  • Cross-functional collaboration
  • Critical thinking

Tags:

Security Analyst
data analysis
abuse detection
security threats
risk mitigation
anomaly detection
AI safety
collaboration
policy enforcement
incident response
investigation
SQL
Generative AI
Large Language Models
Cloud APIs
Machine Learning
Data Science
Python
BigQuery
TensorFlow

How to Get Hired at Google

  • Research Google's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor to understand Google's commitment to Trust & Safety.
  • Tailor your resume: Customize your application to highlight proven experience in data analysis, SQL proficiency, security threat detection, and understanding of AI abuse specific to the Security Analyst, Cloud AI Abuse role.
  • Showcase problem-solving skills: Prepare detailed examples demonstrating your critical thinking, analytical abilities, and proactive approach to identifying and mitigating complex security or abuse challenges in interviews.
  • Understand AI security nuances: Demonstrate specific knowledge of generative AI technologies, Large Language Models, and potential abuse vectors like prompt injection or data poisoning during technical discussions.
  • Highlight collaboration and advocacy: Be ready to discuss experiences where you effectively collaborated with cross-functional teams and advocated for secure design principles within a product development lifecycle.
