We are looking for an Operational Ethics & Safety Manager to join our Responsible Development & Innovation (ReDI) team at Google DeepMind.
In this role, you will partner with research and product teams, and with colleagues across the broader Responsibility team, to consider the potential downstream impacts of Google DeepMind’s research and its applications.
You will work with technical, analytical and operational teams at Google DeepMind to ensure that our work is conducted in line with ethics and safety best practices, helping Google DeepMind to progress towards its mission. You will draw on your technical knowledge of AI models and your scientific background to review the safety performance of our models and to provide analysis and advice to stakeholders across Google DeepMind, including our Responsibility and Safety Council.
About us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The role
As an Operational Ethics & Safety Manager within the ReDI team, you’ll combine your technical knowledge of AI models, your scientific background and your expertise on the societal implications of technology to deliver impactful assessment, advisory and review work, both through direct collaboration on groundbreaking research projects and by helping to develop the broader governance ecosystem at Google DeepMind.
Key responsibilities
- Leading ethics and safety reviews of projects, in close collaboration with project teams, to assess the downstream societal implications of Google DeepMind’s science-focused technology.
- Closely collaborating with the Google and Google DeepMind responsibility communities, including the ReDI evaluations and model policy teams, to prepare for emerging AI capabilities and AGI and to review the safety performance of our AI models.
- Supporting the Responsibility and Safety Council by presenting projects and by developing and communicating assessments to senior stakeholders on a regular basis.
- Designing engagement models to tackle ethics and safety challenges (e.g. running workshops or engaging with external experts) to help teams consider the direct and indirect implications of their work.
- Identifying areas relevant to ethics and safety where research can be advanced.
- Working with broader Google teams to monitor project outcomes and understand their impact.
- Developing and documenting best practices for Google DeepMind projects by working with internal Google DeepMind teams and experts and, where appropriate, external organisations.
In order to set you up for success as an Operational Ethics & Safety Manager at Google DeepMind, we look for the following skills and experience:
- Experience navigating and assessing complex ethical and societal questions related to technology development, including balancing the benefits and risks of AI-based scientific research and applications.
- Excellent technical understanding and communication skills, with the ability to distill sophisticated technical ideas to their essence. Significant experience collaborating with technical stakeholders and highly interdisciplinary teams.
- A strong understanding of the challenges and issues in the field of AI ethics and safety, through proven AI and society experience (e.g. relevant governance, policy, legal or research work).
- The ability to distill complex technology and impact analyses clearly, both in writing and orally. Strong executive stakeholder management skills, including the ability to communicate effectively under tight turnaround times. Proven ability to explain complex concepts simply to a range of collaborators.
- An educational background, to at least undergraduate level, in Physics, Chemistry, Biology, Materials Science and/or a related field.
In addition, the following would be an advantage:
- Experience working with AI governance processes within a public or private institution.
- Experience working within the fields of AI safety or AI ethics as it pertains to a scientific field.
- Relevant research experience.
- Product management expertise, legal analysis expertise or other similar experience.
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
What We Do
Our long-term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).
Guided by safety and ethics, this invention could help society find answers to some of the world’s most pressing and fundamental scientific challenges.
We have a track record of breakthroughs in fundamental AI research, published in journals like Nature and Science. Our programs have learned to diagnose eye diseases as effectively as the world’s top doctors, to save 30% of the energy used to keep data centres cool, and to predict the complex 3D shapes of proteins, which could one day transform how drugs are invented.