20 days ago

AI Security Engineer

Uber

On Site
Full Time
$190,000
New York, NY
Apply

Job Overview

Job TitleAI Security Engineer
Job TypeFull Time
Offered Salary$190,000
LocationNew York, NY

Who's the hiring manager?

Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Uncover Hiring Manager

Job Description

About The Role

As AI systems-especially agentic and autonomous AI-become deeply embedded in our products and internal platforms, the security model must evolve. Traditional application security alone is no longer sufficient. We are looking for an AI Red Team Engineer to help us proactively identify, understand, and mitigate AI-native and agent-specific security risks before they reach production.

In this role, you will build and execute adversarial red-teaming exercises against AI models and AI agents, focusing on how they can be manipulated into unsafe, unintended, or harmful behavior. You will work closely with AI platform teams, product engineers, and security partners to stress-test agent logic, tool usage, memory, and autonomy-and translate findings into concrete guardrails and defenses.

This role is ideal for someone who enjoys thinking like an attacker, understands modern AI systems, and wants to work at the intersection of security, AI, and real-world impact.

What the Candidate Will Do

This role sits at the intersection of offensive security and AI engineering. You will not be limited to traditional penetration testing; instead, you will focus on behavioral, logical, and contextual attacks that cause AI systems to fail in subtle but dangerous ways-often without exploiting classic vulnerabilities. Success in this role means uncovering "unknown unknowns," clearly articulating risk, and helping teams build safer AI systems by design.

  • Design and execute AI red-teaming exercises against LLMs and AI agents, including:
    • prompt injection (direct & indirect)
    • jailbreaking and policy bypass
    • model and tool poisoning
    • memory and context poisoning
    • behavioral drift and unsafe autonomy
    • tool misuse and emergent privilege escalation
  • Analyze agent workflows, logic, and tool graphs to identify systemic security weaknesses beyond prompt-level attacks.
  • Develop reusable adversarial test cases, attack libraries, and red-team playbooks for AI systems.
  • Collaborate with AI platform and product teams to translate red-team findings into actionable mitigations, guardrails, and design changes.

Key Qualifications

Basic Qualifications
  • 3+ years of experience in security engineering, offensive security, red teaming, or AI security.
  • Hands-on experience red-teaming AI models or AI agents, including testing for prompt injection, jailbreaks, unsafe behavior, Excessive agency, Model DoS.
  • Strong understanding of security fundamentals (threat modeling, secure design, least privilege, defense in depth).
  • Ability to clearly document findings and communicate risk to both technical and non-technical stakeholders
  • Proficiency in at least one programming language (e.g., Python, Go, Java, or similar)
Preferred Qualifications
  • Familiarity with AI security tools and frameworks (e.g., PyRIT, AgentDojo, Promptfoo, custom harnesses).
  • Good understanding of GenAI and LLM architectures, including: embeddings, RAG, or agent frameworks.
  • Hands-on experience executing AI Red Teaming exercises, including prompt injection/jailbreaking, unsafe behavior/behavioral drift, model/tool poisoning.
  • Offensive security / penetration testing background (e.g., red team, bug bounty, exploit development).

Key skills/competency

  • AI Security
  • Red Teaming
  • Prompt Injection
  • LLMs
  • Agentic Systems
  • Offensive Security
  • Threat Modeling
  • Python
  • Vulnerability Analysis
  • Risk Assessment

Tags:

AI Security Engineer
AI Security
Red Teaming
LLM Security
Prompt Injection
Penetration Testing
Offensive Security
Cybersecurity
Python
Agentic Systems

Share Job:

How to Get Hired at Uber

  • Tailor your resume: Highlight experience in AI security, red teaming, and relevant programming languages.
  • Showcase AI expertise: Emphasize hands-on experience with AI models, LLMs, and agent frameworks.
  • Demonstrate security fundamentals: Detail your knowledge of threat modeling, secure design, and defense-in-depth strategies.
  • Articulate risk clearly: Prepare examples of communicating technical findings to diverse audiences.
  • Practice AI attack scenarios: Be ready to discuss prompt injection, jailbreaking, and other AI-specific vulnerabilities.

Frequently Asked Questions

Find answers to common questions about this job opportunity

Explore similar opportunities that match your background