2 days ago

Research Engineer, Agentic Safety

Google DeepMind

On Site
Full Time
$200,000
Mountain View, CA

Job Overview

Job TitleResearch Engineer, Agentic Safety
Job TypeFull Time
CategoryCommerce
Experience5 Years
DegreeMaster
Offered Salary$200,000
LocationMountain View, CA

Who's the hiring manager?

Sign up to PitchMeAI to discover the hiring manager's details for this job. We will also write them an intro email for you.

Uncover Hiring Manager

Job Description

Research Engineer, Agentic Safety at Google DeepMind

Accelerate research in strategic projects that enable trustworthy, robust and reliable agentic systems with a group of research scientists and engineers on a mission-driven team. Together, you will apply ML and other computational techniques to a wide range of challenging problems.

About Google DeepMind

Google DeepMind is a dedicated scientific community, committed to “solving intelligence” and ensuring our technology is used for widespread public benefit. We’ve built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don’t set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

The Role

As a Research Engineer, Agentic Safety, you will use your AI and software engineering expertise to collaborate with domain experts and other machine learning scientists within our strategic initiatives programs. Your primary focus will be on building technologies to make AI agents safer. AI agents are increasingly used in sensitive contexts with powerful capabilities, having abilities to access personal data, confidential enterprise data and code, interact with third party applications or websites, or write and execute code in order to fulfil user tasks. Ensuring that such agents are reliable, secure and trustworthy is a large scientific and engineering challenge, with huge potential impact. In this role, you will serve this mission by building infrastructure, researching new approaches to agentic safety, building prototypes and demos, working with partner and client teams, and most importantly, land transformative impact for GDM, our product partners, and the AI ecosystem more broadly.

Key Responsibilities

  • Develop frameworks to evaluate the safety, security and privacy of agentic AI systems at scale across key use cases at Google and GDM.
  • Work on agent orchestration prototypes combining multiple AI components to reliably solve complex tasks in nuanced scenarios.
  • Build leaderboards and evaluation metrics for the project to hill-climb.
  • Integrate novel agentic technologies into research prototypes.
  • Work with product teams to gather research requirements and consult on the deployment of research-based solutions to help deliver value incrementally.
  • Amplify the impact by generalizing solutions into reusable libraries and frameworks for privacy preserving AI agents across Google, and by sharing knowledge through design docs, open source, or external blog posts.

About You

In order to set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:

  • Bachelor in computer science, security or related field, or equivalent practical experience.
  • Passion for accelerating the development of secure agents using innovative technologies.
  • Strong programming experience.
  • Demonstrated record of python implementations of LLM pipelines.
  • Quantitative skills in maths and statistics.
  • Experience with common scripting languages and pipelining tools.

Additional Advantageous Skills

  • Experience in applying machine learning techniques to problems surrounding scalable, robust and trustworthy deployments of models.
  • Experience with GenAI language models, programming languages, compilers, formal methods, and/or private storage solutions.
  • Demonstrated success in creative problem solving for scalable teams and systems.
  • A real passion for AI!

Key skills/competency

  • Agentic AI Systems
  • Machine Learning
  • AI Safety
  • Python Programming
  • LLM Pipelines
  • Software Engineering
  • GenAI Language Models
  • Security & Privacy
  • Evaluation Frameworks
  • Prototype Development

Tags:

Research Engineer
AI safety
agentic systems
machine learning
software engineering
research
prototyping
evaluation
security
privacy
trustworthy AI
Python
LLM pipelines
GenAI
scripting
formal methods
compilers
private storage
data pipelines
computational techniques

Share Job:

How to Get Hired at Google DeepMind

  • Research Google DeepMind's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor.
  • Tailor your resume for Agentic Safety: Highlight experience in AI safety, ML, Python, and agentic systems, aligning with Google DeepMind's needs.
  • Master technical fundamentals: Prepare for coding challenges and in-depth discussions on ML, AI agents, and secure system design.
  • Showcase problem-solving skills: Be ready to discuss creative solutions to complex AI safety or software engineering challenges with Google DeepMind.
  • Network strategically: Connect with Google DeepMind employees on LinkedIn; attend relevant AI conferences or workshops.

Frequently Asked Questions

Find answers to common questions about this job opportunity

Explore similar opportunities that match your background