Research Engineer, GenAI Post-Training for Security & Privacy @ DeepMind
Job Overview
DeepMind is seeking a Research Engineer, GenAI Post-Training for Security & Privacy to enhance our auto-red-teaming capabilities and secure GenAI models such as Gemini. The role focuses on post-training improvements that make models as capable as experienced privacy and security engineers.
Key Responsibilities
- Build post-training data and tools to improve model capabilities.
- Evaluate and auto-red team GenAI models to identify vulnerabilities.
- Generalize solutions into reusable libraries and frameworks.
- Contribute to publications, open source initiatives, and education.
About You
You have a B.S./M.S. in Computer Science or a related quantitative field, along with 5+ years of experience. You have demonstrable experience training or fine-tuning generative models, and proficiency in Python. Experience with JAX or PyTorch, and research publications on ML security, privacy, safety, or alignment, are an advantage.
Additional Information
Any offer of employment will be conditional on a background check. DeepMind values diversity and is committed to equal employment opportunity. If accommodations are needed, please let us know.
Key Skills/Competencies
- auto-red teaming
- Gemini
- generative models
- security
- privacy
- Python
- JAX
- PyTorch
- research
- publication
🎯 Tips for Getting Hired at DeepMind
- Research DeepMind's culture: Explore mission, values, and recent achievements.
- Customize your resume: Emphasize ML security, Python, and model training skills.
- Highlight publications: Showcase research in ML security and privacy.
- Prepare technical examples: Demonstrate generative model fine-tuning expertise.