Security Researcher
Darktrace
Job Overview
Who's the hiring manager?
Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Job Description
Security Researcher at Darktrace
Darktrace is a global leader in AI for cybersecurity that keeps organizations ahead of the changing threat landscape every day. Founded in 2013, Darktrace provides the essential cybersecurity platform protecting nearly 10,000 organizations from unknown threats using its proprietary AI.
The Darktrace Active AI Security Platform™ delivers a proactive approach to cyber resilience to secure the business across the entire digital estate – from network to cloud to email. Breakthrough innovations from our R&D teams have resulted in over 200 patent applications filed. Darktrace’s platform and services are supported by over 2,400 employees around the world. To learn more, visit http://www.darktrace.com.
Job Description:
As part of our cutting-edge research team, you will play a pivotal role in advancing the security and trustworthiness of generative AI technologies. This position offers the opportunity to explore emerging threats, design innovative defenses, and shape best practices for safe and responsible AI deployment. You’ll work at the intersection of machine learning, cybersecurity, and applied research, helping to ensure that next-generation AI systems are robust, secure, and aligned with ethical standards.
This is a hybrid role, with a compulsory attendance of 2 days a week in the Cambridge office.
What will I be doing:
- Investigating trends in generative AI compliance and visibility
- Researching attacker tradecraft targeting generative AI chatbots and agentic systems
- Creating, validating, and testing detections in a research environment
- Co-ordinating with relevant development, product, and machine learning teams
- Provide detailed and actionable feedback on product performance
What experience do I need:
- Familiarity with the evolving landscape of generative AI, including popular foundation models and emerging agentic architectures
- Knowledge of common attacker methodologies targeting AI systems (e.g., prompt injection, data poisoning, inference, and extraction attacks)
- Interest in contributing to a detection engineering team focused on safeguarding AI technologies
- Strong logical reasoning and problem-solving skills, especially in unfamiliar or complex scenarios
- Ability to communicate technical concepts clearly to both technical and non-technical stakeholders.
Benefits:
- 23 days’ holiday + all public holidays, rising to 25 days after 2 years of service
- Additional day off for your birthday
- Private medical insurance which covers you, your cohabiting partner and children
- Life insurance of 4 times your base salary
- Salary sacrifice pension scheme
- Enhanced family leave
- Confidential Employee Assistance Program
- Cycle to work scheme.
Key skills/competency
- Generative AI Security
- Threat Research
- AI Defenses
- Machine Learning
- Cybersecurity
- Applied Research
- Prompt Injection
- Data Poisoning
- Detection Engineering
- Ethical AI
How to Get Hired at Darktrace
- Research Darktrace's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
- Tailor your resume: Highlight expertise in generative AI security, machine learning, and cybersecurity for the Security Researcher role.
- Showcase relevant projects: Detail experiences with prompt injection, data poisoning, and AI system defense strategies.
- Prepare for technical deep dives: Expect questions on AI security challenges, attacker methodologies, and detection engineering.
- Demonstrate collaborative spirit: Emphasize teamwork, problem-solving, and clear communication with technical/non-technical stakeholders.
Frequently Asked Questions
Find answers to common questions about this job opportunity
Explore similar opportunities that match your background