AI Ethics and Safety Policy Researcher
Google DeepMind
Job Overview

Job Description
We are looking for an AI Ethics and Safety Policy Researcher to join our Responsible Development & Innovation (ReDI) team at Google DeepMind (GDM). In this role, you will be responsible for proactively identifying, researching, and addressing emerging AI ethics and safety challenges. Such risks relate to new AI capabilities and modalities, including but not limited to persuasion, social intelligence, personalisation, agentics, and robotics. You will conduct novel research and partner with internal and external experts to develop, adapt and implement practical guidelines and policies which mitigate against emerging risks. These guidelines and policies will ensure that GDM develops and deploys its technology in a way that is aligned with the company's AI Principles.
About Us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The Role
As an AI Ethics and Safety Policy Researcher, your focus will be identifying, deeply understanding and mitigating emerging AI risks. You should expect your outputs to take various forms, depending on the topic or need. This may include: original research papers or other publications on emerging AI ethics and safety issues; ideal model behaviour policies that inform model development and steer evaluations; guidelines for research or governance teams to follow when developing or deploying technology; and artefacts, processes, or coordination mechanisms needed to best support the creation and implementation of those guidelines and policies at GDM and beyond.
Key Responsibilities
- Systematically identify risks associated with emerging and proliferating AI capabilities
- Conduct original research on identified challenges, gathering information from a variety of sources, including external and internal experts, academic literature, and industry reports
- Design and build operational frameworks for mitigating model risks, converting them into standardized artefacts such as universal training datasets and evaluation protocols
- Collaborate with model development teams to help them adopt and apply these frameworks, guiding them in defining project-specific metrics and criteria for significant results
- Communicate findings and recommendations to stakeholders, including researchers, engineers, product managers, and executives
- Support teams across GDM in interpreting the frameworks and ensuring that training and evaluation data are applied as appropriate
- Work closely with relevant teams across the organisation to align and update the frameworks, ensuring their continued relevance in a rapidly changing environment
About You
To set you up for success in this role, we look for the following skills and experience:
- A PhD, or equivalent experience, in a relevant field, such as AI ethics or safety, computer science, social sciences, or public policy
- Proven expertise in AI ethics, AI policy or a related field
- Demonstrable track record of implementing policies
- Strong research and writing skills, evidenced by publications in top journals and conference proceedings
- Experience working within interdisciplinary teams
- Ability to communicate complex concepts and ideas simply for a range of collaborators
- Ability to think critically and creatively about complex ethical issues
Key Skills/Competencies
- AI Ethics
- AI Safety
- Policy Development
- Risk Mitigation
- Original Research
- Operational Frameworks
- Stakeholder Communication
- Interdisciplinary Collaboration
- Critical Thinking
- Machine Learning Principles
How to Get Hired at Google DeepMind
- Research Google DeepMind's culture: Study their mission, values, recent news, and employee testimonials on LinkedIn and Glassdoor, focusing on their commitment to ethical AI and scientific discovery.
- Tailor your resume: Highlight your expertise in AI ethics, policy development, and research, customizing it to showcase achievements aligned with identifying and mitigating AI risks.
- Prepare for technical and behavioral questions: Be ready to discuss your research on emerging AI challenges, your ability to design operational frameworks, and how you communicate complex ethical concepts.
- Network strategically: Connect with current Google DeepMind employees and AI ethics professionals on LinkedIn to gain insights and potentially secure referrals.
- Demonstrate passion for responsible AI: During interviews, articulate your dedication to ensuring AI is developed and deployed safely and ethically, aligning with Google's AI Principles.