AI Security Researcher
Agoda
Job Overview
Who's the hiring manager?
Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Job Description
About Agoda
At Agoda, we bridge the world through travel. Our story began in 2005, when two lifelong friends and entrepreneurs, driven by their passion for travel, launched Agoda to make it easier for everyone to explore the world.
Today, we are part of Booking Holdings [NASDAQ: BKNG], with a diverse team of over 7,000 people from 90 countries, working together in offices around the globe. Every day, we connect people to destinations and experiences, with our great deals across our millions of hotels and holiday properties, flights, and experiences worldwide.
No two days are the same at Agoda. Data and technology are at the heart of our culture, fueling our curiosity and innovation. If you’re ready to begin your best journey and help build travel for the world, join us.
We are looking for an AI Security Researcher with a deep understanding of modern LLMs and Generative AI systems, with a strong offensive security mindset. You will focus on breaking, hardening, and securing AI systems, and working closely with engineering teams to design and implement robust AI security solutions.
Key Responsibilities
- Design, execute, and document jailbreaks, prompt injection attacks, model evasion, data exfiltration, and other offensive techniques against chatbots and AI agents.
- Assess and attempt to compromise Model Context Protocol (MCP)–based systems and other tool-calling / plugin ecosystems.
- Build and automate security testing workflows involving multiple LLM models, APIs, and tools (e.g., Jupyter notebooks, orchestration frameworks).
- Perform offensive security testing and red teaming of AI-driven products, including API manipulation and integration abuse.
- Research and analyze security weaknesses in Large Language Models (LLMs), Generative AI systems, and their surrounding infrastructure.
- Contribute to in-house guardrail design: define, implement, and test safety and security guardrails for LLMs and AI automations.
- Propose and evaluate defensive controls: input/output filtering, policy enforcement, monitoring, anomaly detection, and non-AI controls to secure AI systems.
- Translate research findings into practical engineering requirements and collaborate closely with product and engineering teams to implement fixes and mitigations.
- Stay current with the OWASP Top 10 for LLM / GenAI and other emerging AI security standards, frameworks, and threat models.
- Produce clear technical documentation, proof-of-concepts, and internal knowledge sharing on AI security best practices and new attack/defense techniques.
What you'll Need to Succeed
- Bachelors in Computer Science or related degree.
- Experience 2-5 years in offensive cybersecurity.
- Good communication skills in English to communicate security risks to other teams.
- Deep understanding of LLMs and Generative AI (architectures, prompt processing, context windows, system prompts, tool use, fine-tuning, RAG, etc.).
- Hands-on experience with jailbreaking and red-teaming chatbots and AI agents (e.g., prompt injection, role confusion, data leakage, safety bypasses).
- Strong offensive security background: Experience with API security testing and manipulation. Prior red teaming, penetration testing, or adversarial testing experience. Bug bounty / HackerOne or similar track record is a strong plus.
- Scripting knowledge (Python, PowerShell) and working with no-code flows for automation.
Key skills/competency
- LLM security
- Generative AI
- Prompt injection
- Red teaming
- API security
- Offensive security
- Python scripting
- Guardrail design
- Adversarial testing
- OWASP Top 10 for LLM
How to Get Hired at Agoda
- Research Agoda's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor, especially their affiliation with Booking Holdings.
- Tailor your resume: Highlight specific experience in LLM security, offensive cybersecurity, red teaming, and Python scripting for AI automation.
- Showcase your expertise: Provide concrete examples of jailbreaking, prompt injection, or bug bounty successes in your portfolio or cover letter.
- Prepare for technical deep-dives: Brush up on LLM architectures, prompt engineering, API security testing, and adversarial attack techniques.
- Demonstrate communication skills: Practice articulating complex security risks and proposed mitigations clearly to both technical and non-technical audiences.
Frequently Asked Questions
Find answers to common questions about this job opportunity
Explore similar opportunities that match your background