Research Manager, AI Safety


We're open to hiring for this role at different levels. For the right fit, the role could be structured as a Senior Research Manager, with compensation commensurate with experience and expected scope, which could exceed the range listed here.

About Cambridge Boston Alignment Initiative

The Cambridge Boston Alignment Initiative (CBAI) is a nonprofit research organization working to advance research and education directed towards ensuring that society navigates a safe and beneficial transition to advanced AI systems. Our work takes the form of producing original research and accelerating AI safety research through fellowship programs.

Our inaugural summer fellowship cohort has already published a spotlight paper at the Mechanistic Interpretability Workshop at NeurIPS and had papers accepted at ICLR, and some of our fellows have joined Goodfire and Redwood Research. After a successful 2025 launch, we're rapidly scaling in 2026: we will host multiple fellowship cycles (Fall, Spring, and Summer), double the fellowship cohort, and quadruple our team.

Refer candidates to us, and receive $5,000 if we hire them.

The Role

You'll work closely with research fellows and their mentors — renowned researchers from Cambridge and beyond — to support cutting-edge work on interpretability, AI control, formal verification for provably safe AI, evaluations, and various topics in AI governance & policy. We're interested in hiring research managers with technical research experience as well as research managers with governance & policy research experience.

Research Management (0.7 FTE)
  • Conduct frequent 1-1s with your fellows: provide feedback on research progress, help them overcome obstacles, and coach them through challenges such as debugging their research, preparing literature scaffolds, collecting and analyzing data, and developing methodology for experiments and hypothesis tests

  • Provide feedback on your fellows' research and help CBAI create an environment that nudges your fellows to be more rigorous in their approach

  • Connect fellows with resources, literature, and opportunities they can explore during and after the fellowship program

  • Communicate with your fellows' mentors to define clear research objectives and support your fellows’ research progression

  • Contribute to the fellow selection process by reviewing and interviewing candidates to ensure the cohort consists of individuals with strong epistemic standards

Program Management & Development (0.3 FTE)
  • Design reading group curriculum components and workshop programs

  • Curate a speaker event series based on fellow profiles and recent important studies

  • Support special projects aligned with your strengths (e.g., applicant selection, evaluation frameworks, mentor onboarding)

  • Meet weekly with program leadership to enhance feedback loops and continuously improve the program

  • Stay current on technical AI alignment or governance developments relevant to your fellows' work

  • Prepare weekly briefs on the most recent developments in the field for fellows

About You

We expect you to have most of the qualities listed below.

Experience supporting complex intellectual work. You have helped others execute complex analytical or research projects. This might come from teaching, managing technical teams, conducting your own research, consulting, coordinating academic programs, or providing substantive feedback on others' work.

You understand the research process from the inside. You've done substantial analytical work yourself (whether in academia, policy analysis, consulting, or industry research). This means you can recognize when a research plan is solid vs. hand-wavy, identify blockers in someone's thinking, and suggest concrete next steps.

You're skilled at developmental feedback. You provide constructive feedback that advances the work, rather than simply identifying problems. You can help people strengthen their arguments, tighten their methodology, and present their findings more clearly, even when the specific topic is outside your core expertise.

Excellent communicator. You explain complex concepts clearly and give constructive feedback effectively. You check in proactively when you're unsure about something or notice a potential problem.

Interpersonally excellent. You have genuine empathy, active listening skills, and a servant leadership approach. You enjoy helping others succeed and take pride in their accomplishments. You're skilled at managing stakeholders and coordinating between multiple parties (researchers, advisors, administrators).

Mission-motivated. You are strongly aligned with CBAI's mission and are familiar with AGI safety and catastrophic AI risks. You want to contribute meaningfully to reducing AI catastrophic risks and are passionate about accomplishing as much as possible.

Organized and conscientious. You keep complex projects on track, follow through reliably, and maintain clear communication. You're receptive to feedback and continuously improve your approach based on what you learn.

Curious and adaptable. You're excited to learn about diverse research agendas and work with researchers from varied backgrounds. You actively seek out tools and approaches that can improve fellowship effectiveness.

The ideal candidate will possess most of the qualities described above and hold a bachelor's degree or higher in Computer Science, Mathematics, Statistics, Economics, Public Policy, Political Science, or a related field. They will also have research experience demonstrating strong methodological knowledge and a genuine interest in building a career in AI safety research.

Nice to Haves
  • Previous involvement in AI safety/alignment programs or similar field-building initiatives

  • Published research in interpretability, AI control, or adjacent agendas

  • Experience managing research programs or academic initiatives

However, there is no such thing as a "perfect" candidate. If you are on the fence about applying because you are unsure whether you are qualified, we strongly encourage you to apply.

Why This Role May Not Be the Right Fit

We want to be transparent about what this position entails so you can make an informed decision about whether it's right for you:

You're supporting others' research. Your primary job is helping fellows execute their research projects, not shaping their research strategy. You'll coach fellows through challenges, provide feedback, and connect them with resources, but you won't set research directions or objectives; mentors and fellows own that. In some cases, with mentor and fellow approval, you may contribute substantially enough to merit co-authorship, but this is a facilitation role first and foremost.

Your success is measured by fellows' accomplishments. When fellows publish papers, secure positions at top labs, or advance in their careers, that's your win. If you're primarily motivated by building your own research reputation or publication record, this role won't be satisfying.

Workload varies significantly by fellowship cycle. Active fellowship periods are intense; between cycles, the pace slows considerably. If you strongly prefer steady, predictable workloads, this variability may be challenging.

If this sounds exciting to you — if you want to spend at least a year becoming excellent at supporting talented researchers, developing deep expertise in research facilitation, and gaining broad familiarity with cutting-edge AI safety agendas — then this role could be a great fit. But if you're primarily motivated by leading your own research, driving research strategy, or specializing deeply in a single technical agenda, you might want to wait for a different opportunity.

Role Details and Benefits

Team: You'll report to the Director of Programs.

Salary: $100,000–$145,000, depending on experience level. For exceptional candidates, we are flexible on the compensation range.

We also provide:

  • 5% 403(b) match contribution

  • Comprehensive health insurance

  • Generous PTO policy

  • Meals provided during weekdays

  • Employer-paid commuter benefits

  • Reimbursement for work-related technology and/or home office expenses

  • The opportunity to contribute closely to frontier research, with potential acknowledgement as a co-author on some of your collaborations

U.S. work authorization required (we accept OPT).

Location: This position is primarily based in Cambridge, MA. While we expect you to spend most of your time working in-person from our Harvard Square office (particularly during active fellowship cycles), we can offer some hybrid flexibility between fellowship cohorts for candidates with specific circumstances. In those cases, we can give you access to AI safety co-working spaces in Berkeley and NYC.

Start date: May 2026

Selection Process

We use a multi-stage process to find the right fit:

  1. Application Review: We review applications on a rolling basis and invite strong candidates to phone screens. We take the hiring process seriously: your application will be reviewed in detail by a CBAI employee.

  2. Initial Phone Screen (15 minutes): A conversation with the team manager to discuss your background, understand your interest in research management and AI safety, and answer your initial questions about the role.

  3. Paid Test Task: Strong candidates from the phone screen will receive a paid test task that mirrors actual research manager responsibilities — such as providing feedback on a research proposal, designing program components, creating evaluation frameworks, or suggesting a program design change based on feedback. You'll have a fixed amount of time to complete this.

  4. Interview: Top candidates from the test task will be invited to an interview covering some of the following topics:

    1. Discussion of your test task submission

    2. Case study of fellowship program scenarios

    3. Conversation with CBAI team members and potentially a mentor

    4. Deep dive into your approach to research management

  5. Reference Checks: For our top finalists, we'll conduct reference checks and a final conversation to ensure mutual fit. We'll discuss logistics, answer remaining questions, and clarify expectations.

  6. Offer: Selected candidates will receive an offer and detailed onboarding information.

CBAI is an Equal Opportunity Employer and does not discriminate on the basis of race, religion, color, sex, gender identity or expression, sexual orientation, age, disability, national origin, veteran status, or any other basis covered by appropriate law.

In acknowledgement of the research that suggests that women, gender minorities, and other marginalized groups may be less likely to apply for roles where they don’t meet every criterion, we especially encourage people in these categories to apply.

We may use AI to assist in the initial screening of applications, including to detect whether candidates have used AI models in drafting their application. Decisions are always made by a human on our team.
