Job Overview
Who's the hiring manager?
Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Job Description
About The Role
As AI systems, especially agentic and autonomous AI, become deeply embedded in our products and internal platforms, the security model must evolve. Traditional application security alone is no longer sufficient. We are looking for an AI Red Team Engineer to help us proactively identify, understand, and mitigate AI-native and agent-specific security risks before they reach production.
In this role, you will build and execute adversarial red-teaming exercises against AI models and AI agents, focusing on how they can be manipulated into unsafe, unintended, or harmful behavior. You will work closely with AI platform teams, product engineers, and security partners to stress-test agent logic, tool usage, memory, and autonomy, and translate findings into concrete guardrails and defenses.
This role is ideal for someone who enjoys thinking like an attacker, understands modern AI systems, and wants to work at the intersection of security, AI, and real-world impact.
What the Candidate Will Do
This role sits at the intersection of offensive security and AI engineering. You will not be limited to traditional penetration testing; instead, you will focus on behavioral, logical, and contextual attacks that cause AI systems to fail in subtle but dangerous ways—often without exploiting classic vulnerabilities. Success in this role means uncovering "unknown unknowns," clearly articulating risk, and helping teams build safer AI systems by design.
- Design and execute AI red-teaming exercises against LLMs and AI agents, including:
- prompt injection (direct & indirect)
- jailbreaking and policy bypass
- model and tool poisoning
- memory and context poisoning
- behavioral drift and unsafe autonomy
- tool misuse and emergent privilege escalation
- Analyze agent workflows, logic, and tool graphs to identify systemic security weaknesses beyond prompt-level attacks.
- Develop reusable adversarial test cases, attack libraries, and red-team playbooks for AI systems.
- Collaborate with AI platform and product teams to translate red-team findings into actionable mitigations, guardrails, and design changes.
Basic Qualifications
- 3+ years of experience in security engineering, offensive security, red teaming, or AI security.
- Hands-on experience red-teaming AI models or AI agents, including testing for prompt injection, jailbreaks, unsafe behavior, Excessive agency, Model DoS.
- Strong understanding of security fundamentals (threat modeling, secure design, least privilege, defense in depth).
- Ability to clearly document findings and communicate risk to both technical and non-technical stakeholders.
- Proficiency in at least one programming language (e.g., Python, Go, Java, or similar).
Preferred Qualifications
- Familiarity with AI security tools and frameworks (e.g., PyRIT, AgentDojo, Promptfoo, custom harnesses).
- Good understanding of GenAI and LLM architectures, including: embeddings, RAG, or agent frameworks.
- Hands-on experience executing AI Red Teaming exercises, including prompt injection/jailbreaking, unsafe behavior/behavioral drift, model/tool poisoning.
- Offensive security / penetration testing background (e.g., red team, bug bounty, exploit development).
Key skills/competency
- AI Security
- Red Teaming
- LLM Security
- Prompt Injection
- Penetration Testing
- Threat Modeling
- Python
- Offensive Security
- AI Engineering
- Risk Assessment
How to Get Hired at Uber
- Tailor your resume: Highlight AI security, red teaming, and offensive security experience using keywords from the job description.
- Showcase your skills: Quantify achievements in AI model testing, prompt injection, and risk communication.
- Prepare for technical questions: Be ready to discuss AI architectures, security fundamentals, and threat modeling.
- Demonstrate behavioral fit: Emphasize your attacker mindset, collaboration skills, and proactive approach to security.
- Network and apply: Connect with Uber security professionals on LinkedIn and express your interest.
Frequently Asked Questions
Find answers to common questions about this job opportunity
Explore similar opportunities that match your background