Strategic Risk Analyst
OpenAI

Job Description
About The Team
The Intelligence and Investigations team at OpenAI rapidly identifies and mitigates abuse and strategic risks to ensure a safe online ecosystem. We surface emerging abuse trends, analyze risks, and work with internal and external partners to implement effective mitigations against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.
We are building a horizontal “radar” for AI abuse and strategic risk—correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI’s safety and product decision-makers.
About The Role
As a Strategic Risk Analyst, you will help develop and maintain our central view of strategic risk across OpenAI’s products and platforms. You will synthesize internal abuse patterns, upstream and external intelligence, and product and conversational signals into decision-ready risk insights, recurring briefs, and practical prioritization inputs.
You will partner closely with investigators, engineers, policy and trust & safety counterparts, and measurement and forecasting teammates to translate messy signals into structured judgments (including assumptions and confidence), ranked priorities, and actionable recommendations. This is an opportunity to do high-leverage analysis in a fast-moving environment, where crisp thinking and communication directly shape safety decisions, mitigations, and product readiness.
In this role, you will:
- Monitor and analyze internal risk signals (abuse telemetry, investigations outputs, model and product signals) to identify trends, shifts in tactics, and new abuse patterns.
- Conduct upstream and external scanning (OSINT, ecosystem developments, real-world events) and distill implications for OpenAI’s products and threat landscape.
- Identify and deep dive into harms and misuse across products and channels, turning messy signals into clear analytic findings.
- Connect individual incidents into system-level narratives about actors, incentives, product design weaknesses, and cross-product spillover—pressure-testing hypotheses early.
- Produce concise, decision-ready risk briefs and intelligence estimates with explicit assumptions, confidence levels, and what would change the assessment.
- Convert analysis into clear, ranked priorities and actionable recommendations that product, safety, and policy teams can execute on.
- Define and track key risk indicators and outcome metrics to evaluate whether mitigations are working and drive course corrections when needed.
- Build early-warning and monitoring capabilities with data, engineering, and visualization partners, including dashboards that highlight leading indicators and unusual changes.
- Contribute to product readiness and launch reviews; develop reusable playbooks, FAQs, and briefing materials that help teams respond consistently.
- Drive cross-functional alignment by tailoring readouts to investigations, engineering, policy, trust and safety, and product stakeholders—and ensuring decisions and follow-ups are crisp.
You might thrive in this role if you have:
- Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work.
- Demonstrated ability to analyze complex online harms and AI-enabled misuse (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritized recommendations.
- Strong analytical craft: you can identify weak signals, form hypotheses, test them quickly, state assumptions explicitly, and communicate confidence and uncertainty clearly.
- Comfort working across qualitative and quantitative inputs, including (1) casework, incident reports, OSINT, product context, and policy frameworks, and (2) basic metrics and trends in partnership with data science (e.g., harm prevalence, severity profiles, exposure, escalation rates).
- Strong adversarial and product intuition: you can anticipate how actors may adapt AI and creative tools for misuse, and evaluate how product mechanics, incentives, and UX decisions shape risk.
- Experience designing and using risk frameworks and taxonomies (e.g., harm classification schemes, severity/likelihood matrices, prioritization models) to structure ambiguity and support decision-making.
- Proven ability to work cross-functionally with product, engineering, data science, operations, legal, and policy teams—pushing for clarity on tradeoffs and driving follow-through on mitigation work.
- Excellent written and verbal communication skills, including producing concise, executive-ready briefs and explaining sensitive, complex issues in grounded, concrete terms.
- Comfort operating in fast-changing, ambiguous environments: you can prioritize under uncertainty, iterate quickly, and adjust as the product and threat landscape evolves.
- A builder mindset: you like creating reusable workflows and artifacts (dashboards, playbooks, FAQs, briefing materials) and using modern tools, including OpenAI’s, to scale rigorous analysis.
Key Skills & Competencies
- Strategic Risk Analysis
- Abuse Mitigation
- Online Harms Identification
- OSINT & Intelligence Gathering
- Policy and Trust & Safety
- Cross-functional Collaboration
- Data Synthesis & Interpretation
- Risk Framework Design
- AI Misuse Analysis
- Threat Landscape Monitoring
How to Get Hired at OpenAI
- Research OpenAI's mission: Study their dedication to ensuring AI benefits humanity and their safety principles.
- Tailor your resume: Highlight experience in strategic risk analysis, trust & safety, and intelligence work for AI products.
- Showcase analytical craft: Prepare to demonstrate your ability to identify weak signals, form hypotheses, and communicate uncertainty clearly.
- Emphasize cross-functional collaboration: Provide examples of partnering with engineering, product, and policy teams to drive mitigation efforts.
- Prepare for AI abuse scenarios: Anticipate questions on identifying and mitigating online harms and AI-enabled misuse.