5 days ago

AI Red-Teamer Adversarial AI Testing

Hackajob

Hybrid
Contractor
$150,000
Hybrid

Job Overview

Job TitleAI Red-Teamer Adversarial AI Testing
Job TypeContractor
CategoryCommerce
Experience5 Years
DegreeMaster
Offered Salary$150,000
LocationHybrid

Who's the hiring manager?

Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Uncover Hiring Manager

Job Description

Why This Role Exists

At Mercor, we believe the safest AI is the one that’s already been attacked — by us. We are assembling a red team for this project - human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red team data that makes AI safer for our customers. This project involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources. Before being exposed to any content, the topics will be clearly communicated.

What You’ll Do

  • Red team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation
  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
  • Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
  • Document reproducibly: produce reports, datasets, and attack cases customers can act on

Who You Are

  • You bring prior red teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
  • You’re curious and adversarial: you instinctively push systems to breaking points
  • You’re structured: you use frameworks or benchmarks, not just random hacks
  • You’re communicative: you explain risks clearly to technical and non-technical stakeholders
  • You’re adaptable: thrive on moving across projects and customers

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinfo probing, abuse analysis, conversational AI testing
  • Creative probing: psychology, acting, writing for unconventional adversarial thinking

What Success Looks Like

  • You uncover vulnerabilities automated tests miss
  • You deliver reproducible artifacts that strengthen customer AI systems
  • Evaluation coverage expands: more scenarios tested, fewer surprises in production
  • Mercor customers trust the safety of their AI because you’ve already probed it like an adversary

Why Join Mercor

  • Build experience in human data-driven AI red teaming at the frontier of safety
  • Play a direct role in making AI systems more robust, safe, and trustworthy

Key skills/competency

  • AI Red Teaming
  • Adversarial AI
  • Vulnerability Assessment
  • Prompt Engineering
  • Cybersecurity Principles
  • Data Annotation
  • Risk Analysis
  • Technical Documentation
  • Bilingual Communication (English/Arabic)
  • Machine Learning Security

Tags:

AI Red-Teamer
Adversarial AI
AI Safety
Vulnerability Testing
Prompt Injection
Bias Exploitation
Human Data Generation
Misuse Cases
Systemic Risk
Documentation
Communication
Adversarial ML
Cybersecurity
Socio-technical Risk
RLHF/DPO
Penetration Testing
Exploit Development
Reverse Engineering
Conversational AI
Generative AI
LLMs

Share Job:

How to Get Hired at Hackajob

  • Research Mercor's mission: Study their commitment to AI safety, values, and how their red teaming efforts impact the AI industry.
  • Tailor your resume: Customize your resume to highlight adversarial AI testing, cybersecurity, and socio-technical probing experiences relevant to the AI Red-Teamer role.
  • Showcase your adversarial mindset: Prepare to demonstrate how you instinctively push systems to breaking points, employing structured frameworks, not just random hacks.
  • Prepare for language assessment: Be ready to prove native-level fluency in both English and Arabic, as it's crucial for reviewing sensitive AI outputs.
  • Discuss specific case studies: Be ready to share examples of past red teaming or cybersecurity projects where you uncovered vulnerabilities and documented findings effectively.

Frequently Asked Questions

Find answers to common questions about this job opportunity

Explore similar opportunities that match your background