Gen AI Security Researcher
Alice (Formerly ActiveFence)
Job Overview
Who's the hiring manager?
Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Job Description
About Alice (Formerly ActiveFence)
Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally altered the nature of risk, Alice offers comprehensive coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.
The Role: Gen AI Security Researcher
As a Gen AI Security Researcher, you’ll dive deep into the challenges of AI safety, conducting red-teaming operations to identify vulnerabilities in generative AI systems and their infrastructure. You will conduct red-teaming operations for finding and addressing risks to ensure AI models are robust, secure, and future-proof.
Responsibilities as a Gen AI Security Researcher:
- Conduct sophisticated black-box red-teaming operations to uncover vulnerabilities in generative AI models and infrastructure.
- Design new techniques to bypass the latest AI security mechanisms.
- Evaluate and strengthen the security of AI systems, identifying weaknesses and collaborating to implement improvements.
- Work with cross-functional teams to automate security testing processes and establish best practices.
- Stay ahead of emerging trends in AI security, ethical hacking, and cyber threats to ensure we’re at the cutting edge.
Requirements:
Must Have:
- 3+ years in offensive cybersecurity, especially focused on web applications and API security OR Advanced Ph.D. Candidates with a proven record of research in AI/Cybersecurity.
- Strong programming and scripting skills (e.g., Python, JavaScript) relevant to AI security.
- In-depth understanding of AI technologies, particularly generative models like GPT, DALL-E, etc.
- Solid knowledge of AI vulnerabilities and mitigation strategies.
- Excellent problem-solving, analytical, and communication skills.
Preferred Skills That Set You Apart:
- Certifications in offensive cybersecurity (e.g., OSWA, OSWE, OSCE3, SEC542, SEC522) OR Master's degree and above in Computer Science with a focus on Data Science or AI.
- Experience in end-to-end product development, including infrastructure and system design.
- Proficiency in cloud development.
- Familiarity with AI security frameworks, compliance standards, and ethical guidelines.
- Ability to thrive in a fast-paced, rapidly evolving environment.
Key skills/competency
- Generative AI
- Cybersecurity
- Red-teaming
- AI Security
- Vulnerability Assessment
- Offensive Security
- Python
- API Security
- Cloud Development
- Ethical Hacking
How to Get Hired at Alice (Formerly ActiveFence)
- Research Alice's mission: Study their focus on AI era trust, safety, and security.
- Tailor your resume: Highlight offensive cybersecurity, Gen AI, and red-teaming experience for Alice.
- Showcase AI security expertise: Emphasize knowledge of AI vulnerabilities and mitigation strategies.
- Prepare for technical depth: Be ready to discuss generative models, API security, and ethical hacking.
- Demonstrate problem-solving: Share examples of complex AI security challenges you've tackled.
Frequently Asked Questions
Find answers to common questions about this job opportunity
Explore similar opportunities that match your background