Senior Researcher Safety Systems
@ OpenAI

New York, NY
$420,000
On Site
Full Time
Posted 10 hours ago

Your Application Journey

Personalized Resume
Apply
Email Hiring Manager
Interview

Email Hiring Manager

XXXXXXXXXX XXXXXXXXX XXXXXXXXX******* @openai.com
Recommended after applying

Job Details

About The Team

Safety Systems is at the forefront of OpenAI’s mission to build and deploy safe AGI. The Misalignment Research team focuses on identifying, quantifying, and understanding future AGI misalignment risks before they pose harm.

The Work Of This Research Taskforce Spans Four Pillars

  • Worst‑Case Demonstrations: Craft demos that reveal potential AI failures.
  • Adversarial & Frontier Safety Evaluations: Develop rigorous, repeatable evaluations of dangerous capabilities.
  • System‑Level Stress Testing: Build automated tools to assess end‑to‑end robustness.
  • Alignment Stress‑Testing Research: Investigate failure modes and propose improvements.

About The Role

As a Senior Researcher Safety Systems, you will design and execute cutting‑edge red‑teaming attacks, build adversarial evaluations, and advance the understanding of AI safety failures. Your work will directly influence product launches and long‑term safety roadmaps at OpenAI.

Key Responsibilities

  • Design worst‑case demonstrations for high‑stakes AGI use cases.
  • Develop adversarial and system‑level evaluations and automated red‑teaming tools.
  • Conduct research on alignment failure modes and propose improvement strategies.
  • Publish influential papers to shape internal and industry safety practices.
  • Collaborate across engineering, research, policy, and legal teams.
  • Mentor junior researchers and engineers in rigorous safety work.

Candidate Profile

You might thrive in this role if you have a passion for AI safety and red‑teaming, plus 4+ years of related experience. A strong research track record, fluency in modern ML/AI techniques, and clear communication skills are essential. Advanced academic credentials (Ph.D., master’s, or equivalent) in computer science, machine learning, or security are preferred.

What OpenAI Offers

  • Influence on AGI safety practices at a global leader.
  • Access to cutting‑edge models, tooling, and compute resources.
  • A collaborative, mission‑driven environment with top talent.
  • Competitive compensation, equity, and benefits.

Key skills/competency

  • AGI
  • Red-teaming
  • AI safety
  • Misalignment
  • Adversarial evaluation
  • Stress testing
  • Research
  • ML techniques
  • Collaboration
  • Mentorship

How to Get Hired at OpenAI

🎯 Tips for Getting Hired

  • Customize your resume: Emphasize red-teaming and AI research skills.
  • Highlight technical expertise: Showcase ML and security project experience.
  • Prepare for interviews: Practice explaining complex safety research.
  • Research OpenAI: Study their mission, values, and recent innovations.

📝 Interview Preparation Advice

Technical Preparation

Review latest ML/red-teaming techniques.
Study automated stress testing systems.
Practice coding evaluation infrastructures.
Analyze misalignment research papers.

Behavioral Questions

Describe teamwork on complex safety projects.
Explain handling failure modes effectively.
Share experience mentoring junior colleagues.
Discuss cross-functional collaboration challenges.

Frequently Asked Questions