19 hours ago

AI Red-Teamer

YO IT Consulting

Hybrid
Full Time
$150,000
Hybrid

Job Overview

Job TitleAI Red-Teamer
Job TypeFull Time
CategoryCommerce
Experience5 Years
DegreeMaster
Offered Salary$150,000
LocationHybrid

Who's the hiring manager?

Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Uncover Hiring Manager

Job Description

Why This Role Exists

We believe the safest AI is the one that’s already been attacked — by us. That’s why we’re building a pod of AI Red-Teamers: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.

This role may include reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.

What You’ll Do

  • Red-team AI models and agents: jailbreaks, prompt injections, misuse cases, exploits
  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
  • Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
  • Document reproducibly: produce reports, datasets, and attack cases customers can act on
  • Flex across projects: support different customers, from LLM jailbreaks to socio-technical abuse testing

Who You Are

  • You bring prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
  • You’re curious and adversarial: you instinctively push systems to breaking points
  • You’re structured: you use frameworks or benchmarks, not just random hacks
  • You’re communicative: you explain risks clearly to technical and non-technical stakeholders
  • You’re adaptable: thrive on moving across projects and customers

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinfo probing, abuse analysis
  • Creative probing: psychology, acting, writing for unconventional adversarial thinking

What Success Looks Like

  • You uncover vulnerabilities automated tests miss
  • You deliver reproducible artifacts that strengthen customer AI systems
  • Evaluation coverage expands: more scenarios tested, fewer surprises in production

Why Join US

  • Build experience in human data-driven AI red-teaming at the frontier of safety
  • Play a direct role in making AI systems more robust, safe, and trustworthy

The pay rate for this role may vary by project, customer, and content category. Compensation will be aligned with the level of expertise required, the sensitivity of the material, and the scope of work for each engagement.

We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.

Contract and Payment Terms

  • You will be engaged as an independent contractor.
  • This is a fully remote role that can be completed on your own schedule.
  • Projects can be extended, shortened, or concluded early depending on needs and performance.
  • Payments are weekly on Stripe or Wise based on services rendered.

Key skills/competency

  • Adversarial AI
  • Red-Teaming
  • Vulnerability Assessment
  • Prompt Injection
  • Jailbreaking
  • Cybersecurity
  • Machine Learning Security
  • Risk Analysis
  • Data Annotation
  • Technical Communication

Tags:

AI Red-Teamer
adversarial testing
vulnerability research
prompt injection
jailbreaking
data annotation
risk assessment
exploit development
penetration testing
security analysis
reporting
AI
Machine Learning
LLM
DPO
RLHF
Python
Cybersecurity
NLP
Data Science
AI Safety

Share Job:

How to Get Hired at YO IT Consulting

  • Research YO IT Consulting's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Tailor your resume for AI red-teaming: Highlight adversarial AI testing, cybersecurity, or socio-technical probing experience with specific projects and outcomes.
  • Showcase adversarial thinking: Prepare to discuss instances where you identified and exploited system vulnerabilities in previous roles.
  • Demonstrate structured problem-solving: Be ready to explain how you apply frameworks, taxonomies, or benchmarks to testing processes.
  • Practice clear communication: Prepare to articulate complex technical risks and findings to both technical and non-technical audiences.

Frequently Asked Questions

Find answers to common questions about this job opportunity

Explore similar opportunities that match your background