Protection Scientist Engineer

OpenAI

On Site
Full Time
$350,000
London, England, United Kingdom

Job Overview

Job Title: Protection Scientist Engineer
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: $350,000
Location: London, England, United Kingdom

Job Description

About the Team: Intelligence and Investigations

OpenAI's mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iteratively updating based on what we learn.

The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.

About The Role: Protection Scientist Engineer

The Protection Scientist Engineer role is an interdisciplinary position combining data science, machine learning, investigation, and policy/protocol development. As a Protection Scientist Engineer within Intelligence and Investigations, you will be responsible for designing and building systems to proactively identify and enforce against abuse on OpenAI’s products. This includes ensuring robust abuse monitoring is in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defense against our highest-risk harms. You will also respond to and investigate critical escalations, especially those not caught by our existing safety systems. This requires an expert understanding of our products and data, and involves working cross-functionally with product, policy, and engineering teams.

This role is based in our London office and includes participation in an on-call rotation that will involve resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise-disturbing material.

In This Role, You Will

  • Scope and implement abuse monitoring requirements for new product launches.
  • Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks.
  • Prototype detection, review, and enforcement systems for major harms, and mature them into production.
  • Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.

You Might Thrive In This Role If You

  • Have at least 4 years of experience doing technical analysis and detection, especially using SQL and Python.
  • Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams. An investigative mindset is key.
  • Have experience with basic data engineering, such as building core tables or writing data pipelines in production, and with machine learning principles and execution. Basic software development skills are a plus as this role writes productionised code.
  • Have experience scaling and automating processes, especially with language models.

Key Skills/Competencies

  • Data Science
  • Machine Learning
  • Investigations
  • SQL
  • Python
  • Trust & Safety
  • Policy Development
  • Data Engineering
  • Automation
  • AI Systems

Tags:

Protection Scientist Engineer
Abuse detection
AI safety
Trust & Safety
Investigations
Product policy
Risk management
Monitoring systems
Enforcement
Data analysis
Cross-functional collaboration
SQL
Python
Machine Learning
Data Engineering
Data pipelines
Software development
Language Models
Automation
Production code

How to Get Hired at OpenAI

  • Research OpenAI's mission: Study their dedication to ensuring beneficial AI for humanity, examining their values and recent safety initiatives.
  • Highlight Trust & Safety expertise: Showcase experience in online integrity, abuse detection, content moderation, or risk mitigation.
  • Emphasize technical proficiency: Demonstrate strong skills in SQL, Python, machine learning, and data engineering for proactive defense systems.
  • Showcase an investigative mindset: Provide concrete examples of complex problem-solving and critical analysis in past roles.
  • Tailor your application: Customize your resume and cover letter to align with AI product safety, monitoring, and enforcement responsibilities.
