Applied Research and Growth

San Francisco, CA, USA
In-Office
Software • Generative AI
The Role
About Goodfire

Behind our name: Like fire, AI holds the potential for both devastating harm and immense benefit. Just as our ancestors' mastery of fire enabled them to cook food, smelt metals, and launch rockets into space, AI stands as humanity's most profound innovation since that first controlled flame. Our goal is to tame this new fire, enabling a safe transition into a post-AGI world.

Goodfire is an AI interpretability research lab focused on understanding and intentionally designing advanced AI systems. We believe that advances in interpretability will unlock the next frontier of safe and powerful foundation models.

Our flagship product is Ember, the universal platform for neural programming. Ember decodes the neurons of an AI model to give direct, programmable access to its internal representations. By moving beyond black-box inputs and outputs, Ember unlocks entirely new ways to apply, train, and align AI models — allowing users to discover new knowledge hidden in their model, precisely shape its behaviors, and improve its performance.

We’re backed by Lightspeed Venture Partners, Menlo Ventures, NFDG’s AI Grant, South Park Commons, Work-Bench, and other leading investors.

Working at Goodfire

Our team brings together AI interpretability experts and experienced startup operators from organizations like OpenAI and DeepMind, united by the belief that interpretability is essential to advancing AI development.
We're a public benefit corporation based in San Francisco. All roles are in-person, five days a week, at our Telegraph Hill office.

The role:

We are looking for our first researcher-facing growth hire to join our team. You should be a creative researcher who can build tools to showcase new capabilities unlocked by Ember, our mechanistic interpretability API. You will turn our research into accessible content, demos, and community outreach. You’ll shape how the world learns about and engages with our company’s mission, from driving researcher interest to forming new research collaborations.

Core responsibilities:

  1. Build engaging, interactive content on top of Ember and other tools to expand how people think about mechanistic interpretability (e.g. show how Ember can improve performance on jailbreak benchmarks or reduce hallucinations)
  2. Help our research team make their research more accessible and engaging
  3. Manage our social media, blog, and research outreach, making complex interpretability concepts easy to understand
  4. Engage and build our community

Who you are:

Goodfire is looking for experienced individuals who share our deep commitment to making interpretability accessible. We care about building a team that embodies our values:

Put mission and team first
Everything we do is in service of our mission. We trust each other, deeply care about the success of the organization, and choose to put our team above ourselves.

Improve constantly
We are constantly looking to improve every piece of the business. We proactively critique ourselves and others in a kind and thoughtful way that translates to practical improvements in the organization. We are pragmatic and consistently implement the obvious fixes that work.

Take ownership and initiative
There are no bystanders here. We proactively identify problems and take full responsibility for getting a strong result. We are self-driven, own our mistakes, and feel deep responsibility for what we’re building.

Action today
We have a small amount of time to do something incredibly hard and meaningful. The pace and intensity of the organization is high. If we can take action today or tomorrow, we will choose to do it today.


If you share our values and have at least two years of relevant experience, we encourage you to apply and join us in shaping the future of how we design AI systems.

What we are looking for:

  • Strong understanding of the field of mechanistic interpretability
  • Incredibly creative; pushes the boundaries of tools and technology
  • Has published AI research
  • Excellent writer
  • Can communicate complex concepts simply

Preferred qualifications:

  • Experience working in a fast-paced, early-stage startup environment
  • Good understanding of vibes on X
  • Experience with interpretability techniques and tooling for AI models
  • Experience working with open source AI models
  • Contributions to open-source ML projects or libraries

This role offers a market-competitive salary, equity, and strong benefits. More importantly, you'll have the opportunity to work on groundbreaking technology with a world-class team dedicated to ensuring a safe and beneficial future for humanity.

The expected salary range for this position is $140,000 - $250,000 USD.

This role reports to our CEO.

The Company
HQ: San Francisco, CA
9 Employees
Year Founded: 2024

What We Do

Our mission is to advance humanity's understanding of AI by examining the inner workings of advanced AI models (or “AI Interpretability”). As a research-driven product organization, we bridge the gap between theoretical science and practical applications of interpretability. We're building critical infrastructure that empowers developers to understand, edit, and debug AI models at scale, ensuring the creation of safer and more reliable systems. Goodfire is a public benefit corporation headquartered in San Francisco.
