Security Engineer, AI Security
Electronic Arts (EA)

Job Description
About Electronic Arts
Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.
Security Engineer, AI Security at Electronic Arts
EA Security is seeking an offensive-minded Security Engineer, AI Security to help secure AI-enabled systems, agents, and LLM-integrated workflows across EA’s games, services, and enterprise platforms. This role focuses on identifying real-world security risks in both commercial and internally developed AI platforms, and on building scalable testing, automation, and AI-driven security agents that extend the team’s impact.
You will work closely with Application Security and Red Team engineers, applying an attacker’s mindset to AI systems while building scalable security testing, automation, and guardrails that meaningfully reduce risk. This role is hands-on, technical, and impact-driven, with an emphasis on practical exploitation, adversarial testing, and scalable security outcomes.
This role is ideal for security engineers who enjoy breaking complex systems, reasoning about abuse paths, and turning deep technical findings into scalable and durable AI security improvements.
This position reports to the Application Security and Red Teaming organization.
Responsibilities
- Perform security testing and reviews of AI-enabled applications, agents, and workflows, including architecture, design, and implementation analysis.
- Identify and validate vulnerabilities in LLM-based systems such as data leakage, insecure tool use, authentication gaps, and abuse paths.
- Evaluate AI systems for prompt injection (direct, indirect, conditional, and persistent), including risks introduced through retrieval-augmented generation and agentic workflows.
- Conduct adversarial testing of commercial AI platforms such as Microsoft Copilot, Google AgentSpace, and OpenAI ChatGPT, as well as internally developed AI systems.
- Assess agentic and multi-agent workflows for privilege escalation, unsafe action chaining, cross-agent abuse, and unintended side effects.
- Design, build, and operate AI-driven security agents and automation, including multi-agent workflows, that scale application security, red teaming, and AI security efforts.
- Develop tooling, test harnesses, and repeatable validation frameworks to expand AI security coverage across teams.
- Partner with application engineers to translate findings into actionable mitigations, secure design patterns, and engineering guidance.
- Collaborate with Red Team and AppSec engineers to integrate AI attack techniques and agent-based testing into broader offensive security activities.
- Contribute reusable insights, documentation, and guardrails that help teams adopt AI securely and reduce future systemic risk.
Required Qualifications
- Strong background in application security, offensive security, or a combination of both.
- Hands-on experience identifying and exploiting security weaknesses in modern applications and services.
- Experience testing or securing AI-enabled systems, LLM integrations, or agent-based workflows.
- Ability to reason about attacker misuse, abuse scenarios, and emergent behavior beyond traditional vulnerability classes.
- Experience building automation, tooling, or security agents using languages such as Python, Go, JavaScript, or similar.
- Familiarity with source code review and security tooling such as CodeQL, Semgrep, or equivalent.
- Strong collaboration and communication skills, with the ability to work directly with engineers and security partners.
Preferred Qualifications
- Experience assessing commercial AI platforms or enterprise AI services.
- Familiarity with agent orchestration, tool calling, function execution, or multi-agent systems.
- Experience with traditional red team tooling or adversary simulation techniques.
- Exposure to detection engineering, incident response, or threat intelligence workflows.
- Experience turning novel AI security findings into scalable guidance rather than one-off fixes.
Compensation and Benefits
The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. education, qualifications, certifications, experience, skills, geographic location, or business needs).
PAY RANGES
- British Columbia (depending on location, e.g. Vancouver vs. Victoria): $91,100 - $126,900 CAD
- California (depending on location, e.g. Los Angeles vs. San Francisco): $101,700 - $151,900 USD
- Washington (depending on location, e.g. Seattle vs. Spokane): $96,400 - $126,400 USD
Pay is just one part of the overall compensation at EA.
In the US, we offer a package of benefits including paid time off (3 weeks per year to start), 80 hours per year of sick time, 16 paid company holidays per year, 10 weeks paid time off to bond with baby, medical/dental/vision insurance, life insurance, disability insurance, and 401(k) to regular full-time employees. Certain roles may also be eligible for bonus and equity.
For British Columbia, we offer a package of benefits including vacation (3 weeks per year to start), 10 days per year of sick time, paid top-up to EI/QPIP benefits up to 100% of base salary when you welcome a new child (12 weeks for maternity, and 4 weeks for parental/adoption leave), extended health/dental/vision coverage, life insurance, disability insurance, and a retirement plan to regular full-time employees. Certain roles may also be eligible for bonus and equity.
Key Skills and Competencies
- AI Security
- LLM Vulnerabilities
- Prompt Injection
- Adversarial Testing
- Application Security
- Offensive Security
- Automation Development
- Red Teaming
- Risk Reduction
- Secure Design Patterns
How to Get Hired at Electronic Arts (EA)
- Understand Electronic Arts' culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
- Tailor your resume strategically: Highlight AI security, offensive skills, and relevant gaming industry experience.
- Showcase technical acumen: Prepare to discuss system exploitation, adversarial testing, and AI-specific vulnerabilities.
- Network effectively: Connect with current Electronic Arts employees on LinkedIn for insider perspectives and advice.
- Prepare for behavioral questions: Demonstrate strong collaboration, problem-solving, and adaptability in fast-paced environments.