
Technical Abuse Investigator

OpenAI

On Site
Full Time
$327,700
San Francisco, CA



Job Description

About The Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.

The Intelligence and Investigations team supports this mission by detecting, investigating, and disrupting the misuse of our products, particularly critical or novel harms. Our work enables partner teams to develop data-backed model policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, rewarding applications.

About The Role

As a Technical Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting, investigating, and disrupting malicious use of OpenAI’s platform. You will further scale parts of the investigative process to help our team disrupt harm at scale. This role combines traditional investigative judgment with strong technical fluency: much of the work involves navigating complex datasets to surface actionable abuse signals, not just reviewing individual reports.

In addition to conducting investigations directly, this role is explicitly designed to act as a force multiplier for the broader investigations team. You will be scaling or automating highly manual, important and nuanced processes. You will design and implement lightweight technical solutions—such as notebook templates, data pipelines or internal utilities—that enable specialized investigators to identify, track, and action abuse at a greater scale than a single investigator can currently achieve. Success in this role is measured not only by investigations completed, but by how effectively your work enables you and your team members to operate more efficiently and consistently.
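The "lightweight technical solutions" described above can be as simple as a small utility that turns raw event logs into a ranked list of accounts worth investigating. As a purely illustrative sketch (the event schema, actor IDs, and volume threshold here are hypothetical, not OpenAI's actual data model):

```python
from collections import Counter

def flag_high_volume_actors(events, threshold=100):
    """Count requests per actor and flag actors whose volume exceeds
    a threshold.

    `events` is a list of (actor_id, endpoint) tuples; both the schema
    and the default threshold are invented for this example.
    Returns a sorted list of flagged actor IDs.
    """
    counts = Counter(actor_id for actor_id, _ in events)
    return sorted(actor for actor, n in counts.items() if n > threshold)
```

A utility like this does not replace investigative judgment; it narrows a large dataset down to a shortlist that a specialized investigator can then review manually.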

You will work closely with engineering, legal, investigations, security, and policy partners to respond to time-sensitive escalations, investigate activity that falls outside existing safeguards, and translate investigative insights into scalable detection and enforcement strategies.

This role includes participation in an on-call rotation to handle urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material. This role works Pacific Time hours and is open to remote work within the United States, though we strongly prefer candidates based in San Francisco or New York.

In This Role, You Will

  • Detect, investigate, and disrupt abuse and harm by navigating complex datasets, in partnership with policy, legal, global affairs, security, and engineering teams.
  • Develop and iterate on abuse signals and investigative methods, scaling one-off insights to reduce manual effort and expand coverage.
  • Build and maintain lightweight technical solutions (e.g., SQL/Python data pipelines, investigation templates, dashboards, or internal utilities) for investigators focused on specific harm domains.
  • Develop a deep understanding of OpenAI’s products, data systems, and enforcement mechanisms, and collaborate with engineering and data teams to improve investigative tooling, data quality, and workflows.
  • Communicate investigation findings effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries.
  • Rotate (infrequently) into an incident response role that requires rapid threat triage, investigation, mitigation, sound judgment, and concise briefings to senior leadership.
  • Be someone people enjoy working with.
  • Quickly learn new processes, systems, and team dynamics while thriving in ambiguous, rapidly changing, high-pressure environments.
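The SQL/Python pipeline work mentioned above often amounts to loading investigation data into a queryable store and pulling out the highest-priority cases. A minimal, self-contained sketch using Python's built-in `sqlite3` module (the table name, columns, and sample data are hypothetical, chosen only to illustrate the pattern):

```python
import sqlite3

def top_flagged_accounts(rows, limit=3):
    """Load (account_id, flag_count) rows into an in-memory SQLite
    table and return the accounts with the most abuse flags.

    The schema and column names are invented for this example; a real
    pipeline would query production data stores instead.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE flags (account_id TEXT, flag_count INTEGER)")
    con.executemany("INSERT INTO flags VALUES (?, ?)", rows)
    cur = con.execute(
        "SELECT account_id FROM flags ORDER BY flag_count DESC LIMIT ?",
        (limit,),
    )
    return [row[0] for row in cur.fetchall()]
```

Wrapping a query like this in a reusable function or notebook template is one way a single investigator's one-off analysis becomes a tool the whole team can run.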

You Might Thrive In This Role If You

  • Have deep expertise in at least two of the following domains: agentic AI misuse; automation; encryption; terrorism; fraud; violence; child exploitation; data science; dashboarding; API abuse; product exploits; prompt injection; distillation.
  • Have 5+ years of experience investigating and mitigating abuse in a relevant domain.
  • Have 2+ years of experience on relevant technical projects.
  • Are a strong presenter on safety work in public or policy settings.
  • Have experience scaling or automating processes, especially with LLMs or ML techniques.

Key Skills/Competencies

  • Abuse Investigation
  • Data Analysis
  • Technical Solutions
  • Python
  • SQL
  • Incident Response
  • Policy Enforcement
  • AI Misuse Detection
  • Scalable Mitigation
  • Security Operations

Tags:

Technical Abuse Investigator
Abuse Detection
Incident Response
Data Analysis
Policy Enforcement
Scalable Solutions
AI Misuse
Security Operations
Trust & Safety
Fraud Detection
Python
SQL
Data Pipelines
Machine Learning
LLMs
API Abuse
Dashboarding
Enforcement Mechanisms
Threat Intelligence
Automation


How to Get Hired at OpenAI

  • Research OpenAI's mission: Study their dedication to general-purpose AI benefiting humanity and their focus on safe deployment.
  • Highlight technical investigation skills: Emphasize expertise in data analysis, SQL, Python, and building scalable solutions for abuse detection.
  • Showcase experience with AI misuse: Detail relevant experience in domains like agentic AI misuse, prompt injection, or API abuse.
  • Demonstrate automation and scaling: Provide examples of how you've scaled manual processes or developed technical utilities to enhance efficiency.
  • Prepare for behavioral questions: Be ready to discuss thriving in ambiguous environments, collaboration, and communicating complex findings to diverse stakeholders.
