Security Analyst, Cloud AI Abuse

Job Description
About The Role: Security Analyst, Cloud AI Abuse
Trust & Safety team members at Google are dedicated to identifying and tackling critical problems that challenge the safety and integrity of Google's products. This involves leveraging technical expertise, strong problem-solving abilities, user insights, and proactive communication to protect users and partners from abuse across various Google products such as Search, Maps, Gmail, and Google Ads. This role requires a strategic, big-picture thinker with a passion for ethical practices, working globally and cross-functionally with Google engineers and product managers to combat abuse and fraud with urgency. The ultimate goal is to promote trust in Google and ensure the highest levels of user safety daily.
The Trust and Safety Cloud AI team specifically leads Google Cloud AI security and safety. Their mission is to safeguard users, uphold platform integrity, and ensure the responsible deployment of AI products on a global scale. Operating at the intersection of AI research and real-world security, the team develops foundational defenses to prevent the misuse of generative models and agents. By pioneering proactive threat detection and robust mitigation strategies, they enable Google Cloud to advance AI innovation safely, ethically, and securely.
At Google, earning user trust is paramount. The Trust & Safety team comprises experts dedicated to making the internet safer, partnering with teams across Google to deliver innovative solutions against malware, spam, and account hijacking. This diverse team of Analysts, Policy Specialists, Engineers, and Program Managers works to reduce risk and combat abuse across all Google products, protecting users, advertisers, and publishers worldwide in over 40 languages.
Minimum Qualifications
- Bachelor's degree or equivalent practical experience.
- 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
- Experience with SQL (a brief illustrative sketch follows this list).
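To make the data-analysis and SQL expectations above concrete, here is a minimal, hypothetical sketch (not from the posting): Python with an embedded SQL aggregation over an invented abuse_events table, producing the kind of per-day summary statistics this role describes. All table and column names are assumptions for illustration.

```python
# Hypothetical sketch: the abuse_events table and its columns are invented
# to illustrate the kind of SQL summary statistics this role calls for.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE abuse_events (
    event_day TEXT,   -- e.g. '2024-05-01'
    product   TEXT,   -- e.g. 'cloud_ai'
    verdict   TEXT    -- 'abusive' or 'benign'
);
INSERT INTO abuse_events VALUES
    ('2024-05-01', 'cloud_ai', 'abusive'),
    ('2024-05-01', 'cloud_ai', 'benign'),
    ('2024-05-02', 'cloud_ai', 'abusive');
""")

# Daily abuse rate: the share of events flagged abusive, per day and product.
query = """
SELECT event_day,
       product,
       COUNT(*) AS total_events,
       AVG(CASE WHEN verdict = 'abusive' THEN 1.0 ELSE 0.0 END) AS abuse_rate
FROM abuse_events
GROUP BY event_day, product
ORDER BY event_day;
"""
for row in conn.execute(query):
    print(row)  # ('2024-05-01', 'cloud_ai', 2, 0.5), then ('2024-05-02', ...)
```

The same GROUP BY/aggregate pattern scales from this toy table to production analytics warehouses.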
Preferred Qualifications
- Master's degree in a technical discipline (e.g., Computer Science, Statistics, Mathematics, or Operations Research).
- 5 years of relevant work experience in data analysis.
- Experience in security threats or abuse detection.
- Knowledge of or experience in one or more of the following domains: anomaly detection (illustrated in the sketch after this list), security threat analysis and investigation, time-series analysis, Cloud APIs, or metrics and reporting.
- Understanding of generative AI technologies, including Large Language Models (LLMs) and AI agents.
- Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
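As a toy illustration of the anomaly-detection and time-series skills named above (again, not part of the posting), the sketch below flags days whose request counts deviate sharply from a trailing mean; the data, window, and threshold are invented.

```python
# Toy anomaly detector: flag points whose z-score against a trailing
# window exceeds a threshold. Data, window, and threshold are invented.
from statistics import mean, stdev

def flag_anomalies(series, window=7, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical daily request counts with one obvious spike at index 8.
counts = [100, 102, 98, 101, 99, 103, 97, 100, 500, 101]
print(flag_anomalies(counts))  # -> [8]
```

Production systems would layer seasonality handling and many more signals on top, but the trailing-window comparison is the core idea.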
Responsibilities
- Monitor Google Cloud AI products for signs of abuse, including prompt injection, jailbreaking, data poisoning, distillation, and generation of policy-violating content.
- Perform in-depth analysis of risks associated with both generative and agentic AI. Measure these risks using benchmarking, evaluations, red teaming, and scaled usage monitoring.
- Develop, tune, and deploy rules, heuristics, and rate limits to proactively block abusive actors and mitigate automated attacks (see the sketch after this list).
- Effectively collaborate with engineering, product, and legal teams to ensure that the risks of AI are understood and robust solutions are adopted.
- Educate cross-functional teams about Gen AI safety risks and advocate for secure design principles. Promote a culture of safety and user trust throughout the product development process.
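The rules, heuristics, and rate limits mentioned above can be pictured with a minimal sketch like the following. It is invented for illustration, not Google's implementation: a sliding-window rate limiter that blocks a caller once it exceeds a per-window request budget.

```python
# Minimal sliding-window rate limiter, invented for illustration; real
# abuse defenses combine many signals beyond raw request volume.
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    def __init__(self, max_requests=60, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._events = defaultdict(deque)  # caller_id -> request timestamps

    def allow(self, caller_id, now=None):
        """Return True if the caller is under budget, else False (blocked)."""
        now = time.monotonic() if now is None else now
        window = self._events[caller_id]
        # Drop timestamps that have aged out of the window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False
        window.append(now)
        return True

limiter = SlidingWindowRateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("caller-a", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# -> [True, True, True, False]
```

In practice such limits are tuned per product and combined with reputation and content signals rather than applied in isolation.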
Benefits
In accordance with Washington state law, Google discloses the comprehensive benefits package available to all eligible US-based employees in this full-time position. The US base salary range is $126,000-$181,000, plus bonus, equity, and benefits. Benefits include:
- Health, dental, vision, life, disability insurance
- Retirement Benefits: 401(k) with company match
- Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
- Sick Time: 40 hours/year (statutory, where applicable); 5 days/event (discretionary)
- Maternity Leave (Short-Term Disability + Baby Bonding): 28-30 weeks
- Baby Bonding Leave: 18 weeks
- Holidays: 13 paid days per year
Note: Compensation details reflect base salary only and do not include bonus, equity, or additional benefits.
Key Skills/Competencies
- Data Analysis
- SQL
- Security Threats
- Abuse Detection
- Generative AI (Gen AI)
- Large Language Models (LLMs)
- Anomaly Detection
- Cloud APIs
- Risk Assessment
- Problem-Solving
How to Get Hired at Google
- Research Google's culture: Study its mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
- Tailor your resume for AI security: Customize your resume to highlight data analysis, SQL, AI abuse detection, and relevant Cloud AI experience.
- Showcase your technical expertise: Emphasize your experience with anomaly detection, LLMs, and threat investigation in your application materials.
- Prepare for behavioral interviews: Practice articulating how you've solved complex problems, collaborated cross-functionally, and promoted trust in previous roles.
- Network effectively: Connect with current Google employees in Trust & Safety or AI security on LinkedIn to gain insights and potential referrals.