Advisory Researcher, ML Engineer
Lenovo

Job Description
About Lenovo
Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, serving millions of customers daily across 180 markets. With a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world's largest PC company with a comprehensive portfolio of AI-enabled, AI-ready, and AI-optimized devices, infrastructure, software, solutions, and services. Lenovo's continuous investment in world-changing innovation is fostering a more equitable, trustworthy, and smarter future globally. Listed on the Hong Kong stock exchange (HKSE: 992) (ADR: LNVGY), Lenovo is committed to transformation and innovation. Learn more at www.lenovo.com and through its StoryHub.
Please note: The team's office is located at 121 Menachem Begin Road, Tel Aviv, 61st floor, in the POINT office complex. A hybrid work model is in place, with at least three days on site each week.
The Opportunity: Advisory Researcher, ML Engineer
Lenovo Digital Trust Lab is seeking a hands-on Advisory Researcher, ML Engineer, to join its Hybrid AI Security team. The role centers on model build and fine-tuning workflows, working closely with security researchers to integrate security and Responsible AI capabilities throughout the training lifecycle.
You will operate at the intersection of machine learning, security, and platform engineering, developing the tooling, pipelines, and controls that ensure models are trained responsibly, securely, and in adherence to Lenovo's Trust, Privacy, and Responsible AI principles.
Key Responsibilities
- Design and maintain ML training and fine-tuning pipelines for LLMs and other ML models.
- Collaborate closely with security researchers to embed security-driven requirements into data preparation, training, and model build workflows.
- Implement the ML-side capabilities needed to support security controls, including data filtering hooks, dataset labeling, and training instrumentation.
- Develop and deploy controls to detect and mitigate training-time threats such as data poisoning, backdoor attacks, contamination, and data leakage.
- Engineer fine-tuning workflows (e.g., LoRA/PEFT) with auditability, reproducibility, and policy enforcement built in.
- Integrate model evaluation and testing frameworks, with an emphasis on security, robustness, and safety, directly into the model build phase.
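To make the training-time threat controls above concrete, the sketch below flags duplicate records and label conflicts (a simple data-poisoning heuristic: the same input appearing with contradictory labels) before fine-tuning. It is a minimal, hypothetical illustration using only the Python standard library, not Lenovo tooling; the record schema (`text`, `label`) is an assumption for the example.

```python
import hashlib
import json


def record_fingerprint(record: dict) -> str:
    """Stable SHA-256 fingerprint of a training record's input text."""
    return hashlib.sha256(record["text"].encode("utf-8")).hexdigest()


def scan_dataset(records: list[dict]) -> dict:
    """Flag exact duplicates and label conflicts: the same input text
    appearing more than once, possibly with contradictory labels."""
    seen: dict[str, str] = {}  # fingerprint -> first label observed
    duplicates, conflicts = [], []
    for i, rec in enumerate(records):
        fp = record_fingerprint(rec)
        if fp in seen:
            duplicates.append(i)
            if seen[fp] != rec["label"]:
                conflicts.append(i)  # same text, different label
        else:
            seen[fp] = rec["label"]
    return {"duplicates": duplicates, "label_conflicts": conflicts}


# Hypothetical toy dataset with one conflicting duplicate.
data = [
    {"text": "transfer funds to account X", "label": "unsafe"},
    {"text": "what is the weather today", "label": "safe"},
    {"text": "transfer funds to account X", "label": "safe"},
]
report = scan_dataset(data)
print(json.dumps(report))  # flags record 2 as a duplicate with a label conflict
```

In a real pipeline, a check like this would run as a gating step before training and write its report into the experiment's lineage record.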
Minimum Requirements
- A minimum of 3 years of hands-on experience as an ML Engineer or Applied ML Engineer.
- Proficiency in Python and strong command of leading ML frameworks such as PyTorch, TensorFlow, or JAX.
- Practical, hands-on experience in training or fine-tuning ML models, including Large Language Models (LLMs).
- A solid understanding of ML pipelines, from data ingestion and preprocessing through training and evaluation.
- Familiarity with foundational security and Responsible AI concepts, including data integrity, bias detection, and model robustness.
Preferred Requirements
- Demonstrated experience with advanced LLM fine-tuning techniques, including LoRA, QLoRA, PEFT, and various adapter methods.
- Familiarity with prominent fine-tuning frameworks and tooling like Hugging Face, LLaMA Factory, and Unsloth.
- Knowledge of training-time attack vectors, such as data poisoning, backdoor attacks, and distribution shift.
- Experience in implementing data scanning, validation, or quality checks within ML pipelines.
- Exposure to secure ML build practices, including experiment tracking, versioning, and lineage management.
- Experience integrating evaluation tools (focused on security, safety, or Responsible AI) into CI/CD or ML pipelines.
- Familiarity with governance or compliance-driven ML workflows.
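For candidates less familiar with the adapter methods listed above, the sketch below shows what a typical LoRA configuration looks like and why it cuts trainable parameters. The hyperparameter values are illustrative assumptions, not a prescribed setup; with the Hugging Face `peft` library, these keys map onto `LoraConfig` fields of the same names.

```python
# Illustrative LoRA hyperparameters (assumed values, not a Lenovo spec).
lora_config = {
    "r": 8,                                  # rank of the low-rank update matrices
    "lora_alpha": 16,                        # scaling factor applied to the update
    "lora_dropout": 0.05,                    # dropout on the adapter path
    "target_modules": ["q_proj", "v_proj"],  # attention projections to adapt
    "task_type": "CAUSAL_LM",
}

# Parameter-count intuition: LoRA adds two matrices, A (d x r) and B (r x d),
# per adapted weight, so a d x d projection trains 2 * d * r parameters
# instead of d * d.
d, r = 4096, lora_config["r"]  # d = 4096 is an assumed hidden size
trainable_per_layer = 2 * d * r
full_per_layer = d * d
print(trainable_per_layer, full_per_layer)  # 65536 vs 16777216
```

At rank 8 and hidden size 4096, each adapted projection trains roughly 0.4% of the parameters full fine-tuning would touch, which is what makes the auditability and reproducibility requirements above tractable on modest hardware.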
What Lenovo Offers
- Significant opportunities for career advancement and continuous personal development.
- Access to a diverse and extensive range of professional training programs.
- Performance-based rewards that recognize and celebrate individual and team achievements.
- Work-life balance with a flexible hybrid work model (3 days in office, 2 days remote).
Key Skills and Competencies
- Machine Learning Engineering
- LLM Fine-tuning
- Responsible AI
- Data Security
- ML Pipelines
- Python
- PyTorch/TensorFlow/JAX
- Model Evaluation
- Security Controls
- Data Integrity
How to Get Hired at Lenovo
- Research Lenovo's culture: Study their mission ("Smarter Technology for All"), values, recent AI innovations, and employee testimonials on LinkedIn and Glassdoor to align your application.
- Tailor your resume for ML security: Customize your resume to highlight experience in ML model development, fine-tuning, data integrity, and responsible AI principles relevant to the Advisory Researcher, ML Engineer role.
- Showcase your technical prowess: Prepare to demonstrate strong Python, PyTorch/TensorFlow/JAX, and LLM fine-tuning skills, emphasizing experience with secure ML build practices during technical interviews at Lenovo.
- Articulate your impact on AI security: Be ready to discuss specific examples of how you've designed secure ML pipelines, mitigated training-time threats, or integrated robust evaluation frameworks, demonstrating your value to Lenovo's Digital Trust Lab.
- Network and express genuine interest: Connect with current Lenovo employees, particularly within the Digital Trust Lab, on LinkedIn to gain insights and show proactive engagement for this Advisory Researcher, ML Engineer position.