AI Security Penetration Tester / AI Red Team Engineer
Jobs via Dice
Job Overview
Who's the hiring manager?
Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Job Description
AI Security Penetration Tester / AI Red Team Engineer
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Virtuous Tech Inc, is seeking a highly skilled professional to lead offensive security engagements focused on AI/ML-powered applications and platforms.
Position Overview
In this role, you will identify, exploit, and demonstrate security risks across traditional and AI-specific attack surfaces, including LLMs, AI-enabled APIs, and AI-driven business logic. You will work alongside Engineering, Security, Red Teams, SOC, and AI research teams to simulate realistic AI attacks and guide remediation strategies.
Key Responsibilities
- Conduct AI-focused penetration testing on web, API, mobile, and AI-powered systems.
- Perform red teaming exercises including prompt injection, jailbreak testing, model evasion, and adversarial ML attacks.
- Identify risks like model poisoning, data leakage, adversarial inputs, and AI business logic abuse.
- Execute threat modeling and architecture reviews for AI-enabled applications.
- Develop and enhance AI-focused offensive security tools and methodologies.
- Research emerging AI attack techniques for business impact assessment.
- Deliver comprehensive penetration testing reports and executive presentations.
- Lead end-to-end engagements including scoping, execution, reporting, and remediation.
- Partner with engineering teams to provide actionable security recommendations.
- Collaborate with Red Teams and SOC to improve AI security playbooks.
Required Qualifications
- 3+ years of penetration testing experience (web, API, mobile).
- Proven experience in AI red teaming, LLM security testing, or adversarial ML.
- Proficiency with tools like Burp Suite Pro, Netsparker, and Checkmarx.
- Working knowledge of AI/ML frameworks such as TensorFlow, PyTorch, LLM APIs, and LangChain.
- Strong understanding of OWASP Top 10, API security, and modern attack vectors.
- Excellent communication skills.
- Relevant security certifications (GWAPT, OSWE, OSWA, CREST, etc.) preferred.
- Bachelor’s degree in Computer Science, Cybersecurity, or equivalent experience.
Preferred Qualifications
- Experience testing LLM-based applications, chatbots, copilots, or AI workflows.
- Familiarity with MLOps, model deployment security, and cloud AI platforms (AWS, Azure, Google Cloud Platform).
- Ability to build custom offensive tools/scripts in Python, Go, or similar languages.
- Exposure to SOC operations, detection engineering, or purple team exercises.
- Contributions to AI security research, blogs, talks, or open-source projects.
What Success Looks Like
- AI vulnerabilities identified before production release.
- Clear demonstration of AI attack paths and business risk.
- Actionable remediation guidance adopted by engineering teams.
- Continuous evolution of AI red teaming methodologies.
- Measurable improvement in the enterprise AI security posture.
Key Skills/Competency
- Penetration Testing
- Red Teaming
- AI Security
- Adversarial ML
- LLM Security
- Threat Modeling
- Vulnerability Assessment
- Security Tools
- Risk Analysis
- Remediation
How to Get Hired at Jobs via Dice
- Customize your resume: Highlight AI security and red teaming skills.
- Research Dice: Understand their tech expert community.
- Leverage certifications: Emphasize relevant security certificates.
- Showcase project experience: Detail real-world AI security testing examples.
Frequently Asked Questions
Find answers to common questions about this job opportunity
Explore similar opportunities that match your background