Safeguards Analyst, Human Exploitation and Abuse
Anthropic

Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About The Role
As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment. As a member of the user well-being team, your initial focus will be on standing up detection, review, and escalation workflows for this domain — from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways. This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you'll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.
Important context
In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.
Responsibilities
- Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy.
- Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas.
- Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces.
- Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets (see the illustrative sketch after this list), then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team.
- Study trends internally and in the broader ecosystem — including evolving trafficking and sextortion tactics — to anticipate how AI systems could be misused for exploitation as capabilities advance.
- Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material.
- Build and maintain relationships with external intelligence partners — including hotlines, NGOs, and industry hash-sharing consortia — to inform our approach and enable appropriate real-world escalation.
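For a concrete feel of the data-analysis side of these responsibilities, here is a minimal, purely illustrative sketch of the kind of aggregation an investigator might run to surface repeat bad-actor behavior. The table layout and column names (account_id, policy_area, flagged_at) are hypothetical and do not describe Anthropic's actual systems.

```python
# Illustrative sketch only: surface accounts with repeated flags in one
# policy area. All names here (the DataFrame columns account_id,
# policy_area, flagged_at) are hypothetical.
import pandas as pd

def surface_repeat_offenders(
    flags: pd.DataFrame, policy_area: str, min_flags: int = 3
) -> pd.DataFrame:
    """Return accounts with at least `min_flags` flags in `policy_area`,
    ordered by flag volume, with first/last activity for triage context."""
    subset = flags[flags["policy_area"] == policy_area]
    summary = (
        subset.groupby("account_id")
        .agg(
            flag_count=("flagged_at", "size"),
            first_flag=("flagged_at", "min"),
            last_flag=("flagged_at", "max"),
        )
        .reset_index()
    )
    return summary[summary["flag_count"] >= min_flags].sort_values(
        "flag_count", ascending=False
    )
```

In practice the same aggregation would typically start as a SQL query against an events warehouse; the point is the shape of the analysis (group, count, bound by a threshold, rank), not the tooling.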
Required
- 3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field.
- Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation.
- Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization.
- Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations.
- Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure.
- Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning.
- Strong attention to detail and ability to maintain accurate documentation.
- Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams.
Preferred
- Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives).
- Experience conducting open-source investigations or threat actor profiling in a trust & safety, intelligence, or law enforcement context.
- Experience working with generative AI products, including writing effective prompts for content review and enforcement (a toy example follows this list).
- A deep interest in AI safety and responsible technology development.
- Experience standing up real-world harm escalation pathways or working with law enforcement referral processes.
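As a toy illustration of the prompt-writing skill mentioned above, the sketch below shows one possible shape for a content-review prompt. The policy labels and JSON output schema are invented for this example and are not Anthropic's actual review prompts or policies.

```python
# Illustrative sketch only: one possible shape for an enforcement-review
# prompt. The policy labels and output schema are invented for the example.
REVIEW_PROMPT_TEMPLATE = """\
You are assisting a trust-and-safety reviewer.

Policy areas under review:
- human_trafficking
- sextortion
- image_based_sexual_abuse

Content to review:
<content>
{content}
</content>

Respond with JSON: {{"violates": true or false, "policy_area": "<label or none>",
"rationale": "<one or two sentences citing the specific policy language>"}}
"""

def build_review_prompt(content: str) -> str:
    """Fill the template with the flagged content under review."""
    return REVIEW_PROMPT_TEMPLATE.format(content=content)
```

A good review prompt pins down the decision space (a closed label set and a machine-parseable output) so results can be audited and fed back into evaluation datasets.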
Compensation
The annual compensation range for this role is $245,000–$285,000 USD.
Logistics
- Education requirements: Bachelor's degree in a related field or equivalent experience.
- Location-based hybrid policy: Expect to be in one of our offices at least 25% of the time. Some roles may require more time in offices.
- Visa sponsorship: We do sponsor visas; if we make you an offer, we will make every reasonable effort to obtain one for you.
Application Guidance
We encourage you to apply even if you do not believe you meet every single qualification. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications, making representation even more important. We strive to include a range of diverse perspectives on our team.
Your safety matters to us. Anthropic recruiters only contact you from @anthropic.com email addresses. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information. If unsure, do not click links; visit anthropic.com/careers directly.
How We're Different
At Anthropic, we believe the highest-impact AI research will be big science. We work as a single cohesive team on large-scale research efforts, valuing impact above all: advancing our goals of steerable, trustworthy AI. We view AI research as an empirical science, which has parallels to physics and biology. We are an extremely collaborative group, hosting frequent research discussions to ensure we pursue the highest-impact work. Communication skills are highly valued. Our research directions are best understood by reading our recent publications, which build on much of our team's prior work (e.g., GPT-3, Circuit-Based Interpretability, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences).
Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space.
Key Skills and Competencies
- Safeguards Analyst
- Human Exploitation and Abuse
- Trust and Safety
- Content Moderation
- Counter-Exploitation
- SQL
- Python
- Policy Enforcement
- AI Safety
- Risk Assessment
How to Get Hired at Anthropic
- Research Anthropic's mission: Understand their commitment to safe and beneficial AI systems.
- Tailor your resume: Highlight experience in trust & safety, content moderation, and counter-exploitation.
- Showcase technical skills: Emphasize proficiency in SQL, Python, and data analysis tools.
- Prepare for interviews: Be ready to discuss complex decision-making and sensitive content handling.
- Demonstrate AI interest: Express your passion for AI safety and responsible technology.