
Research Engineer, Frontier Safety Mitigations, DeepMind

DeepMind, London, UK

Minimum qualifications:

  • Bachelor’s degree or equivalent practical experience.
  • 5 years of experience with software development in one or more programming languages.
  • 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.

Preferred qualifications:

  • PhD in Computer Science or Machine Learning, equivalent practical experience, or publications at relevant venues (e.g., NeurIPS, ICLR, ICML, or EMNLP).
  • Experience with cybersecurity detection and response, building classifiers and anomaly detection systems at scale, or taking safety defenses and mitigations from research concepts to scalable production systems.
  • Experience in adversarial machine learning, automated red-teaming, or model interpretability and probes.
  • Experience collaborating on or leading applied ML projects, including LLM training, inference, and fine-tuning.
  • Experience using AI coding agents with strong architectural judgment, and experience with TPUs and JAX.
  • Knowledge of AI control, chain-of-thought monitoring, monitorability, and related frontier safety research.

About the job

In this role, you will de-risk model launches by defending against misuse domains (e.g., Cybersecurity, Chemical, Biological, Radiological, Nuclear, and Conventional Explosive [CBRNE], and Harmful Manipulation). You will build evaluations, conduct red-teaming, research and deploy mitigations (both in-model and out-of-model), and monitor emerging risks to enable the beneficial use of technology.

DeepMind is a dedicated scientific community, committed to ‘solving intelligence’ and ensuring technology is used for widespread public benefit. The Frontier Safety Mitigations team operates in a collaborative environment with a culture of support, dedication, and teamwork. The team takes the possibility of dangerous model capabilities seriously as AI advances. Proactively researching and implementing defense-in-depth mitigations is a critical part of the overall strategy for building safe AI.

You will join the Frontier Safety Mitigations team within the Gemini Safety team to build safety mitigations for frontier models. You will focus on building defenses against these risks, contributing to DeepMind's Frontier Safety Framework commitments.

Artificial intelligence will be one of humanity’s most transformative inventions. At Google DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.
We are pushing the boundaries across multiple domains. Our global teams offer diverse learning opportunities and varied career pathways for those driven to achieve exceptional results through collective effort.

Responsibilities

  • Build advanced classifiers and data pipelines to detect misuse, owning the end-to-end process from automated evaluation to rapid model iteration.
  • Build cross-context monitoring systems to detect coordinated harms, developing novel signal aggregation methods across disparate user sessions to identify large-scale attack vectors.
  • Implement data-driven, semi-automated account-level response systems to detect, track, and apply strikes against persistent malicious actors using rich signals from production traffic.
  • Evaluate and secure agentic AI systems by developing threat models, creating testing environments, and deploying robust mitigations against frontier-level agentic hacking and long-horizon attacks.
  • Advance research in automated red-teaming and adversarial robustness, leveraging multi-turn and agentic attacks to systematically test for and uncover misuse vulnerabilities.

Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy.

Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.

If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.

To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
