Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
About Us
As part of the Security & Privacy Research Team at Google DeepMind, you will play a key role in researching, designing, and building the technology that will help us work with and control advanced AI systems.
The Role
Agents are increasingly used to handle long-horizon tasks. We already see this in coding, where advances are moving us from simple code autocomplete to AI systems that write, build, test, and debug code on their own. As we get closer to AGI, we need guarantees that help control and guide these agents, especially in scenarios where the agent’s capabilities may exceed those of the systems tasked with monitoring it. You will:
- Go beyond traditional security assumptions to model how a highly capable agent could misuse its access, exfiltrate data, or establish a rogue deployment within complex production environments.
- Develop techniques for monitoring advanced agents. How can we detect emergent deception, collusion, or obscured long-term plans before they result in harmful actions?
- Create novel evaluation methodologies and benchmarks to measure the effectiveness of different control strategies against highly capable simulated adversaries (a toy version of such an evaluation is sketched after this list).
- Identify and formalize unsolved research problems in AI control, focusing on the unique challenges posed by agents that may exceed human oversight capabilities.
- Design, prototype, and evaluate novel control systems and monitoring techniques. This includes theoretical work, large-scale experiments, and building proof-of-concept systems.
- Collaborate closely with teams working on Gemini and agent infrastructure to understand emergent risks and integrate control mechanisms directly into the systems where they are most needed.
- Publish groundbreaking research and contribute to the broader academic and policy conversation on long-term AI safety and control.
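To give a concrete flavour of the evaluation work mentioned above, here is a minimal, purely illustrative sketch of a trusted-monitoring control experiment in Python; the policy, monitor, audit budget, and numbers are hypothetical stand-ins, not descriptions of any real system.

```python
# Toy sketch of a trusted-monitoring control evaluation.
# An "untrusted" policy occasionally attempts a harmful action; a weaker
# "trusted" monitor assigns each action a suspicion score, and the most
# suspicious fraction of actions (the audit budget) is reviewed.
# We then count how many attacks slip past auditing.
import random

AUDIT_BUDGET = 0.02    # fraction of actions that can be audited
N_EPISODES = 10_000
ATTACK_RATE = 0.001    # how often the red-team policy tries to attack

def untrusted_policy():
    """Returns (action, is_attack); attacks are rare and deliberate."""
    is_attack = random.random() < ATTACK_RATE
    return ("exfiltrate_data" if is_attack else "benign_step"), is_attack

def trusted_monitor(action):
    """Noisy suspicion score in [0, 1]; attacks score higher on average."""
    base = 0.7 if action == "exfiltrate_data" else 0.2
    return min(1.0, max(0.0, random.gauss(base, 0.15)))

episodes = []
for _ in range(N_EPISODES):
    action, is_attack = untrusted_policy()
    episodes.append((trusted_monitor(action), is_attack))

# Audit the most suspicious actions, up to the budget.
episodes.sort(key=lambda e: e[0], reverse=True)
cutoff = int(AUDIT_BUDGET * N_EPISODES)
audited, unaudited = episodes[:cutoff], episodes[cutoff:]

caught = sum(is_attack for _, is_attack in audited)
missed = sum(is_attack for _, is_attack in unaudited)
print(f"attacks caught by auditing: {caught}, attacks missed: {missed}")
```

In practice the adversary adapts to the monitor and attacks can be spread across many innocuous-looking steps, which is what makes the benchmark design described above a genuine research problem.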
We are looking for a creative and rigorous researcher who is passionate about tackling the foundational safety challenges of advanced AI.
In order to set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:
- Ph.D. in Computer Science or related quantitative field, or B.S./M.S. in Computer Science or related quantitative field with 5+ years of relevant experience.
- Demonstrated research or product expertise in AI safety, AI alignment, or a related security field.
In addition, the following would be an advantage:
- Experience with adversarial research, red-teaming, or vulnerability research, particularly in complex software or AI/ML systems.
- Strong software engineering skills and experience with ML frameworks like JAX, PyTorch, or TensorFlow.
- Familiarity with concepts from game theory or mechanism design as they apply to AI.
- A track record of delivering research impact in multi-team, collaborative environments.
The US base salary range for this full-time position is $166,000 - $244,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
What We Do
Our long-term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI).
Guided by safety and ethics, this invention could help society find answers to some of the world’s most pressing and fundamental scientific challenges.
We have a track record of breakthroughs in fundamental AI research, published in journals such as Nature and Science. Our programs have learned to diagnose eye diseases as effectively as the world’s top doctors, to save 30% of the energy used to keep data centres cool, and to predict the complex 3D shapes of proteins - which could one day transform how drugs are invented.







