Cybersecurity Landscape Analyst
OpenAI

Job Description
The Intelligence and Investigations team at OpenAI seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem, collaborating closely with internal and external partners. This contributes to OpenAI's overarching goal of developing AI that benefits humanity.
The Strategic Intelligence & Analysis (SIA) team provides safety intelligence for OpenAI’s products by monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. Their work informs safety mitigations, product decisions, and partnerships, ensuring OpenAI’s tools are deployed securely and responsibly across critical sectors.
About The Role
OpenAI is seeking a Cybersecurity Landscape Analyst to help understand the evolving external cyber threat environment and its implications for products, customers, and the broader AI ecosystem.
This is an outward-facing intelligence and analysis role. The Cybersecurity Landscape Analyst monitors emerging attacker TTPs, threat-group behaviors, infrastructure trends, and real-world cyber innovation at the intersection of AI and all cyber threat surfaces, including devices and robotics. Using structured research, competitive intelligence, adversarial thinking, and scenario analysis, this role stress-tests assumptions about how frontier AI capabilities could be misused, targeted, or integrated into broader cyber campaigns, even in the absence of active warnings or internal incidents.
This role does not involve internal investigations, platform data detection, or ownership of OpenAI's infrastructure protection or incident response. Instead, it translates the external cyber landscape into clear risk context, strategic foresight, and decision support for internal stakeholders, with defined handoffs to the operational, detection, and security teams that own those functions. The analyst collaborates closely with these cross-functional teams, drawing on their operational perspectives to sharpen external analysis while supplying external insights, threat trends, and attacker innovation to inform their priorities and preparedness. In this way, the role bridges external intelligence and internal execution, ensuring a bi-directional flow between strategic cyber analysis and implementation teams. Work will synthesize signals from external sources with insights from the Integrity, Security, and Safety Systems teams to produce crisp strategic assessments, priority questions, and actionable recommendations.
In this role, you will:
- Monitor and interpret the evolving cyber threat landscape:
  - Track emerging cyber TTPs, attacker innovation, threat-group behavior, and ecosystem-level shifts relevant to AI systems.
  - Analyze how state actors, criminal networks, hacktivists, and hybrid actors adapt AI tools or target AI infrastructure.
  - Identify structural risk patterns affecting AI providers, customers, and downstream sectors.
- Conduct structured external research and adversarial analysis:
  - Use competitive intelligence, red-team-style thinking, and scenario methods to explore how frontier AI capabilities could be exploited or targeted.
  - Develop forward-looking assessments of cyber threat evolution over 6–24 months.
  - Surface “unknown unknowns” and stress-test assumptions about attacker incentives, constraints, and capabilities.
- Translate external signals into strategic risk context for cross-functional teammates:
  - Produce concise, executive-ready intelligence estimates articulating threat relevance, potential impact, and confidence levels.
  - Develop priority questions and structured risk frames to inform product, safety, security, and policy decision-making.
  - Benchmark OpenAI’s risk posture against real-world incidents affecting other AI providers and adjacent technology sectors.
- Support product and ecosystem readiness:
  - Contribute to product reviews and safety readiness processes by outlining plausible cyber-enabled misuse or targeting modes grounded in external analysis.
  - Help shape practical mitigation considerations, with clear handoffs to the operational and security teams that own implementation.
- Represent OpenAI in sensitive external engagements:
  - Serve as a credible analytical counterpart in engagements with external partners.
  - Communicate OpenAI’s threat perspective and align on shared risk trends and emerging threat vectors.
  - Support collaboration that complements, without duplicating, incident response, investigations, or core security operations functions.
You might thrive in this role if you:
- Have significant experience (typically 5+ years) in cybersecurity intelligence, strategic threat analysis, trust & safety, or national-level cyber risk assessment.
- Demonstrate deep familiarity with cyber threat actors, intrusion tradecraft, vulnerability exploitation trends, and cybercrime ecosystems.
- Have experience translating external threat reporting and OSINT into structured risk assessments and executive guidance.
- Are comfortable using adversarial thinking and foresight methodologies (e.g., horizon scanning, scenario planning, red-teaming) to explore emerging threat vectors.
- Can clearly distinguish between intelligence analysis and operational security work, and work effectively across that boundary.
- Are an excellent, credible communicator capable of distilling complex cyber threat dynamics into crisp, decision-relevant insights.
- Currently hold or are eligible for a U.S. security clearance.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The company pushes the boundaries of what AI systems can do and deploys those capabilities safely through its products, treating AI as a powerful tool that must keep safety and human needs at its core. To achieve its mission, OpenAI values diverse perspectives, voices, and experiences.
OpenAI is an equal opportunity employer committed to providing reasonable accommodations to applicants with disabilities. Background checks will be administered in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act for US-based candidates.
Key Skills and Competencies
- Cybersecurity Intelligence
- Strategic Threat Analysis
- OSINT (Open-Source Intelligence)
- Adversarial Thinking
- Risk Assessment
- AI Security
- Geopolitical Risk
- Cybercrime Ecosystems
- Scenario Planning
- Executive Communication
How to Get Hired at OpenAI
- Research OpenAI's mission: Study their dedication to beneficial AI, values, and safety principles.
- Tailor your resume: Highlight extensive experience in cybersecurity intelligence, strategic threat analysis, and AI implications.
- Showcase adversarial thinking: Emphasize your ability to use foresight methodologies and stress-test assumptions.
- Demonstrate deep cyber expertise: Articulate familiarity with TTPs, threat actors, and vulnerability exploitation trends.
- Prepare for communication: Practice distilling complex cyber threat dynamics into clear, executive-ready insights.