Goodfire is a research lab advancing the field of AI interpretability.
Our mission is to solve the technical alignment problem for powerful AI systems. We believe that major advances in mechanistic interpretability (understanding how AI systems work internally) are key to solving this challenge.
Goodfire is a public benefit corporation headquartered in San Francisco.
The role:
We are looking for a Research Engineer to join our team and help develop robust, scalable systems for deploying interpretability techniques on large AI models. You will collaborate closely with our research team to translate novel interpretability methods into production-ready tools and work on scaling our infrastructure to handle increasingly large models and complex use cases.
Core responsibilities:
- Design and implement scalable, efficient systems for applying interpretability techniques to large language models in production environments
- Collaborate with our research team to develop robust pipelines for extracting, processing, and visualizing the internal representations and decision-making processes of AI models
- Optimize and parallelize interpretability algorithms to handle increasingly large and complex models
- Build and maintain the infrastructure for training, deploying, and monitoring interpretability-enabled AI models
- Work closely with our product team to ensure seamless integration of interpretability features into our core product
Who you are:
Goodfire is looking for experienced individuals who share our deep commitment to making interpretability accessible. We care deeply about building a team that embodies our values:
Put mission and team first
All we do is in service of our mission. We trust each other, deeply care about the success of the organization, and choose to put our team above ourselves.
Improve constantly
We are constantly looking to improve every piece of the business. We proactively critique ourselves and others in a kind and thoughtful way that translates to practical improvements in the organization. We are pragmatic and consistently implement the obvious fixes that work.
Take ownership and initiative
There are no bystanders here. We proactively identify problems and take full responsibility for getting a strong result. We are self-driven, own our mistakes, and feel deep responsibility for what we're building.
Action today
We have a small amount of time to do something incredibly hard and meaningful. The pace and intensity of the organization is high. If we can take action today or tomorrow, we will choose to do it today.
If you share our values and have at least two years of relevant experience, we encourage you to apply and join us in shaping the future of how we design AI systems.
What we are looking for:
- At least 5 years of experience in research engineering, machine learning engineering, or a related field
- Expertise in Python and deep learning frameworks such as PyTorch or TensorFlow
- Experience with scaling and deploying large-scale machine learning models in production environments
- Familiarity with distributed computing frameworks and cloud platforms
- Experience writing scalable, well-designed software
- Passion for AI interpretability and a commitment to responsible AI development
Preferred qualifications:
- Experience working in a fast-paced, early-stage startup environment
- Experience with interpretability techniques and tooling for AI models
- Experience working with open source AI models
- Contributions to open-source ML projects or libraries
This role offers a market-competitive salary, equity, and benefits. More importantly, you'll have the opportunity to work on groundbreaking technology with a world-class team dedicated to ensuring a safe and beneficial future for humanity.
This role reports to our CTO.
What We Do
Our mission is to advance humanity's understanding of AI by examining the inner workings of advanced AI models (or “AI Interpretability”). As a research-driven product organization, we bridge the gap between theoretical science and practical applications of interpretability.
We're building critical infrastructure that empowers developers to understand, edit, and debug AI models at scale, ensuring the creation of safer and more reliable systems.