AI Emerging Risks Analyst
OpenAI

Job Description
About The Team
The Intelligence and Investigations team at OpenAI rapidly identifies and mitigates abuse and strategic risks. Working in close collaboration with internal and external partners, the team helps keep the online ecosystem safe and contributes to OpenAI's goal of developing AI that benefits humanity.
The Strategic Intelligence & Analysis (SIA) team specifically provides safety intelligence for OpenAI's products by monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. Its work informs safety mitigations, product decisions, and partnerships, ensuring the secure and responsible deployment of OpenAI's tools across critical sectors.
About The Role: AI Emerging Risks Analyst
OpenAI is seeking an AI Emerging Risks Analyst to understand potential harms and misuse of AI amid rapid change. The role involves scanning available signals and applying strategic foresight methodologies to enable proactive detection and mitigation of threats, from known actors misusing new technology to entirely new AI-enabled risks.
You will provide a strategic-level perspective on evolving risk areas, producing actionable risk taxonomies relevant to OpenAI's platforms, surfaces, and broader business interests. Using mixed quantitative and qualitative methodologies, you will identify early warning signs, investigate concerning behavior, and transform weak signals into clear, prioritized risk calls. Your focus will be on upstream ecosystem scanning, competitive benchmarking, and external narrative/risk sense-making. This work will guide cross-functional partners across protection and safety in implementing mitigations, keeping users, brands, and communities safe while fostering productive, creative uses of AI.
Key Responsibilities
- Map and Prioritize Emerging Risks: Build and continuously refine a clear picture of emerging signals and trends affecting the AI ecosystem through upstream and external scanning. Design and maintain harm taxonomies providing foresight on how AI harms and misuse may manifest over 0-24 months and beyond. Contribute to an evergreen risk register and prioritization framework, surfacing top issues by severity, prevalence, exposure, and trajectory (a minimal scoring sketch follows this list).
- Detect and Deep Dive into Emerging Abuse Patterns: Create comprehensive approaches to horizon scanning, competitive benchmarking, and external narrative/risk sense-making. Stay current on abuse trends, connecting individual incidents into system-level stories about actors, incentives, and product design weaknesses, often hypothesizing them before they manifest on OpenAI's surfaces.
- Turn Analysis into Actionable Risk Intelligence: Translate findings into clear, ranked risk lists and concrete mitigation proposals for product, safety, and policy teams. Work with Global Affairs and Communications to reinforce OpenAI’s leadership in online safety. Track mitigation effectiveness and advocate for course corrections when data demands it.
- Build Early Warning and Measurement Capabilities: Help define core metrics and signals for AI environment safety (e.g., key harm prevalence, severity distributions, escalation rates, brand safety issues). Collaborate with data science and visualization colleagues to shape monitoring views and dashboards, highlighting leading indicators and unusual changes (see the anomaly-flag sketch after this list). Pioneer new uses of OpenAI's own technologies to scale detection and transform workflows.
- Provide Strategic Analysis and Future-Looking Perspectives: Produce concise, comprehensive strategic intelligence estimates with confidence levels to inform judgments and recommendations. Run scenario analyses exploring AI harm evolution (e.g., scam networks with agentic AI, state actor misuse of frontier models). Help design and run tabletop exercises for internal and partner audiences, distilling risks and identifying mitigations. Benchmark OpenAI’s risk profile against external incidents and platforms.
- Shape Safety Readiness for New Products: Contribute to product readiness and launch reviews by outlining expected abuse modes based on broad, upstream understanding. Translate risk insights into practical guidance for internal teams (product, marketing, partnerships, comms) and external partners. Develop reusable frameworks, playbooks, FAQs, and briefing materials for organizational understanding and consistent response to AI risks.
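The risk register and prioritization framework named in the first responsibility can be pictured as a simple scoring model over the four dimensions listed there. The sketch below is purely illustrative: the field names, scales, and weights are assumptions, not OpenAI's actual framework.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row in a hypothetical evergreen risk register."""
    name: str
    severity: float    # expected harm if realized, 0-5
    prevalence: float  # how often it is observed today, 0-5
    exposure: float    # share of surfaces/users it could touch, 0-5
    trajectory: float  # growth trend of the signal, -1 (fading) to +1 (accelerating)

def priority_score(r: RiskEntry, trend_weight: float = 1.5) -> float:
    """Rank risks by combining current impact with momentum.

    Severity, prevalence, and exposure multiply (a risk needs all three
    to matter); trajectory then scales the result up or down, so a
    fast-growing weak signal can outrank a static, well-known issue.
    """
    base = r.severity * r.prevalence * r.exposure
    return base * (1.0 + trend_weight * r.trajectory)

# Hypothetical entries: a nascent agentic-scam pattern vs. a familiar spam vector.
risks = [
    RiskEntry("agentic scam networks", severity=4.5, prevalence=1.0, exposure=3.0, trajectory=0.8),
    RiskEntry("template spam", severity=2.0, prevalence=4.0, exposure=4.0, trajectory=-0.2),
]
for r in sorted(risks, key=priority_score, reverse=True):
    print(f"{r.name}: {priority_score(r):.1f}")  # the emerging pattern ranks first
```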
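Similarly, the early-warning metrics in the fourth responsibility could feed a basic "unusual change" check. This is a toy sketch on assumed data (the weekly escalation rates are hypothetical), not a production detector:

```python
import statistics

def flag_unusual_change(series: list[float], z_threshold: float = 2.0) -> bool:
    """Flag when the latest value departs sharply from its recent baseline.

    A minimal leading-indicator check: compare the newest weekly reading
    (e.g., escalation rate per 10k sessions) against the mean and standard
    deviation of the preceding window.
    """
    *baseline, latest = series
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma >= z_threshold

# Hypothetical weekly escalation rates; the jump in the final week trips the flag.
weekly_escalation_rate = [1.1, 1.0, 1.2, 0.9, 1.1, 2.6]
print(flag_unusual_change(weekly_escalation_rate))  # True
```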
Qualifications
You might thrive in this role if you have:
- Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence, with a focus on translating emerging risks into actionable intelligence.
- Demonstrated ability to analyze complex online harms (e.g., harassment, coordinated abuse, scams, influence operations, brand safety issues) and convert all-source analysis into concrete, prioritized recommendations.
- Strong analytical skills and comfort with both qualitative and quantitative inputs: casework, incident reports, OSINT, product context, policy frameworks, and basic metrics/trends.
- Strong adversarial and product intuition, able to foresee how actors might misuse AI tools and evaluate how product mechanics, incentives, and UX decisions influence risk.
- Experience designing and using risk frameworks and taxonomies (e.g., harm classification schemes, severity/likelihood matrices, prioritization models) to structure ambiguous spaces and support decision-making (a matrix sketch follows this list).
- Understanding of foresight methodologies such as horizon scanning, scenario planning, tabletop exercises, and simulations.
- Proven ability to work cross-functionally with product, engineering, data science, operations, legal, and policy teams, including pushing for clarity on tradeoffs and following through on mitigation work.
- Excellent written and verbal communication skills, including producing concise, executive-ready briefs and explaining sensitive issues in grounded terms.
- Comfort operating in fast-changing, ambiguous environments: identifying weak signals, forming hypotheses, testing quickly, and adjusting as the landscape evolves.
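As one concrete example of the risk frameworks mentioned above, here is a minimal severity/likelihood matrix. The band labels and thresholds are illustrative assumptions, not any specific team's standard:

```python
# Axes of a hypothetical severity x likelihood matrix.
SEVERITY = ["negligible", "minor", "moderate", "major", "critical"]
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]

def risk_band(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) pair to a triage band."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (SEVERITY.index(severity) + 1)
    if score >= 15:
        return "act now"          # immediate mitigation and escalation
    if score >= 8:
        return "monitor closely"  # early-warning metrics, owner assigned
    return "watchlist"            # periodic horizon-scanning review

print(risk_band("possible", "major"))   # 3 * 4 = 12 -> monitor closely
print(risk_band("likely", "critical"))  # 4 * 5 = 20 -> act now
```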
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. The company pushes the boundaries of AI capabilities and deploys them safely through its products. OpenAI believes AI is a powerful tool that must be built with safety and human needs at its core, and values diverse perspectives in pursuit of its mission.
Key Skills & Competencies
- AI Safety & Ethics
- Risk Analysis & Mitigation
- Strategic Foresight
- Threat Intelligence
- Policy Development
- Online Harms Investigation
- Quantitative & Qualitative Analysis
- Cross-functional Collaboration
- Scenario Planning
- Product Safety Lifecycle
How to Get Hired at OpenAI
- Research OpenAI's mission: Study their commitment to beneficial AI, safety principles, and recent product launches.
- Tailor your resume for AI safety: Highlight experience in trust & safety, risk analysis, and AI policy alignment.
- Showcase analytical prowess: Prepare examples of complex online harm analysis and strategic intelligence translation.
- Demonstrate cross-functional collaboration: Emphasize instances of working with product, engineering, and policy teams.
- Articulate foresight capabilities: Be ready to discuss horizon scanning, scenario planning, and risk framework application.