
Applied AI Security Engineer

Lenovo

On Site
Full Time
₪300,000
Tel Aviv District, Israel

Job Overview

Job Title: Applied AI Security Engineer
Job Type: Full Time
Category: Commerce
Experience: 5 Years
Degree: Master
Offered Salary: ₪300,000
Location: Tel Aviv District, Israel


Job Description

About Lenovo

We are Lenovo. We do what we say. We own what we do. We WOW our customers.

Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY).

This transformation, together with Lenovo’s world-changing innovation, is building a more inclusive, trustworthy, and smarter future for everyone, everywhere. To find out more, visit www.lenovo.com and read the latest news on our StoryHub.

Work Location and Policy

Please note that the team has moved to a new location. The office is now at 121 Menachem Begin Road, Tel Aviv, 61st floor, in the POINT office complex. Per our office policy, you will be required to work on-site at least three days a week.

About the Role: Applied AI Security Engineer

Lenovo Digital Trust Lab is seeking an Applied AI Security Engineer to design, build, and deploy runtime security controls for AI and agentic systems. This role focuses on protecting AI systems during inference and execution, including LLM guardrails, agent tool control, MCP gateway protections, abuse prevention, and cost/resource safeguards.

You will translate AI-security research and threat models into practical controls that operate in real time, bridging the gap between adversarial research and deployed systems.

Key Responsibilities

  • Design and implement runtime AI security controls (guardrails, filters, policy engines, gateways).
  • Build protections for LLM inference, agent tool execution, MCP / plugin frameworks, and RAG pipelines.
  • Implement prompt, input, and output inspection for abuse, jailbreaks, data leakage, and policy violations.
  • Develop resource and abuse controls (rate limiting, cost protection, Denial-of-Wallet mitigations).
  • Turn abstract threats into concrete, testable controls.
  • Integrate controls into existing AI platforms and SDKs with minimal performance impact.
  • Collaborate with AI red-teaming, model evaluation, monitoring, and product teams.
  • Contribute to threat modeling and validation of controls against real attack scenarios.
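To make the inspection responsibilities above concrete, here is a minimal sketch of runtime input/output inspection. The patterns and function names are illustrative assumptions, not Lenovo's implementation; a production control would combine heuristics like these with model-based classifiers and policy engines.

```python
import re

# Hypothetical heuristics for jailbreak-style prompt injection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now (in )?developer mode", re.IGNORECASE),
]

# Credential-shaped strings that should not leak in model output.
SECRET_PATTERN = re.compile(r"(?:api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def inspect_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the injection heuristics."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def inspect_output(text: str) -> str:
    """Redact obvious credential-shaped strings before returning output."""
    return SECRET_PATTERN.sub("[REDACTED]", text)
```

In a deployed gateway, checks like these would sit in the request/response path with tight latency budgets, which is why the role stresses minimal performance impact.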

Minimum Requirements

  • 3+ years of experience as an Applied AI Engineer, Software Engineer, or ML Engineer working on production AI systems.
  • Strong experience with Python and building backend or middleware services.
  • Hands-on experience working with LLM inference and agentic AI systems, including tool calling, orchestration layers, or multi-step reasoning workflows.
  • Understanding of AI threat vectors (prompt injection, jailbreaks, data leakage, tool abuse).
  • Familiarity with runtime control concepts such as policy enforcement, validation, rate limiting, or access control.

Preferred Requirements

  • Experience building or securing agentic AI frameworks, including tool execution, plugin systems, or MCP-like protocols.
  • Hands-on experience implementing LLM guardrails, input/output inspection, or policy-based enforcement at inference time.
  • Familiarity with RAG pipelines, including retrieval filtering and response validation.
  • Experience designing protections against agent misuse and abuse, including prompt injection, tool abuse, and excessive compute usage.
  • Knowledge of cost and resource management in AI systems (token budgets, rate limiting, Denial-of-Wallet prevention).
  • Background in AI security, application security, or abuse prevention is a strong plus, but not mandatory.

What We Offer

  • Health & Disability Insurance
  • Pension / Retirement Plan
  • Meal Vouchers
  • Employee Referral Bonus
  • Children of Lenovo Employees Scholarship Program
  • Lenovo and Motorola Product Discounts
  • Employee Assistance Program (e.g., health, legal, and financial consultancy)
  • Internal E-learning Development Platform Available to Employees

We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, national origin, veteran status, disability, or any other federal, state, or local protected class.

Key Skills / Competencies

  • AI Security
  • Runtime Controls
  • LLM Guardrails
  • Agentic Systems
  • Python Programming
  • Backend Development
  • Threat Modeling
  • Prompt Injection
  • Data Leakage Prevention
  • Policy Enforcement



How to Get Hired at Lenovo

  • Research Lenovo's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor, especially their commitment to Smarter Technology for All and AI innovation.
  • Tailor your resume: Highlight specific experience in AI security, Python development, LLM inference, agentic AI systems, and backend/middleware services to align with Lenovo's requirements.
  • Showcase practical experience: Emphasize your ability to translate AI security research into deployed runtime controls and your hands-on work with production AI systems.
  • Prepare for technical deep-dives: Be ready to discuss AI threat vectors, runtime control concepts, and your experience with specific mitigations like prompt injection and data leakage prevention.
  • Demonstrate collaboration: Illustrate how you've worked effectively with cross-functional teams, including red-teaming, model evaluation, and product teams, on security initiatives.
