Want to get hired at Crossing Hurdles?
AI Security Researcher
Crossing Hurdles
HybridHybrid
Original Job Summary
About the AI Security Researcher Role
Crossing Hurdles, a recruitment firm, is referring top candidates to world-leading AI research labs. In this role as an AI Security Researcher, you will red-team AI models and agents by crafting jailbreaks, prompt injections, misuse cases, and exploit scenarios.
Key Responsibilities
- Red-team AI models: develop jailbreaks and injection tests.
- Generate high-quality human data and annotate AI failures.
- Apply taxonomies, benchmarks, and playbooks to ensure consistent testing.
- Document findings with reproducible reports and datasets.
- Support multiple projects including LLM jailbreaks and socio-technical abuse testing.
Required Qualifications
- Prior red-teaming or cybersecurity experience, or strong AI background.
- Expertise in adversarial machine learning including RLHF/DPO attacks.
- Cybersecurity skills: penetration testing, exploit development, reverse engineering.
- Experience with socio-technical risks such as harassment and disinformation analysis.
- Creative probing skills using psychology, acting, or writing.
Application Process
- Upload your resume.
- Participate in an AI interview based on your resume (15 min).
- Submit the application form.
Key Skills/Competency
Adversarial Testing, Cybersecurity, AI, Red-Teaming, Penetration Testing, Exploit Development, Reverse Engineering, Annotation, LLM, Socio-technical Analysis
How to Get Hired at Crossing Hurdles
🎯 Tips for Getting Hired
- Research Crossing Hurdles' culture: Study their mission and recent placements.
- Customize your resume: Highlight AI adversarial and cybersecurity experience.
- Prepare for technical challenges: Review red-teaming and penetration testing examples.
- Practice interview insights: Prepare real examples and structured explanations.
📝 Interview Preparation Advice
Technical Preparation
circle
Review adversarial ML techniques.
circle
Practice penetration testing methods.
circle
Study jailbreak dataset creation.
circle
Learn prompt injection strategies.
Behavioral Questions
circle
Explain past red-teaming challenges.
circle
Describe creative problem solving instances.
circle
Share experiences with flexible deadlines.
circle
Discuss teamwork in technical projects.