The Cambridge Boston Alignment Initiative (CBAI) is a nonprofit research organization working to advance research and education directed towards ensuring that society navigates a safe and beneficial transition to advanced AI systems. We produce original research and accelerate AI safety research through fellowship programs.
After a successful 2025 launch of our AI Safety Research Fellowship — whose inaugural cohort published a spotlight paper at the Mechanistic Interpretability Workshop at NeurIPS, placed papers at ICLR, and produced fellows now at Goodfire and Redwood Research — we're rapidly scaling in 2026. As part of this growth, we are launching the inaugural AIxBio Research Fellowship, bringing the same high-touch, mentor-driven research model to the intersection of AI and biosecurity.
Refer candidates to us and receive $5,000 if we hire them.
The Role
You'll work closely with fellows and their mentors, biosecurity researchers, AI safety researchers, and domain experts from academia and industry to support cutting-edge work at the intersection of AI and biosecurity. This might include capability evaluations on biological systems, regulatory frameworks for mitigating dual-use research risks, or other research agendas at the frontier of AI-enabled biosecurity threats. We are open to hiring Research Managers with either technical research experience or policy and governance research experience. For the right fit, we are open to adjusting the scope and compensation of this role.
Research Management (0.7 FTE)
Conduct frequent 1:1s with your fellows, providing feedback on research progress and helping them overcome obstacles, including debugging research designs, preparing literature scaffolds, and supporting data collection, analysis, and methodology development
Provide substantive feedback on fellow research and help CBAI create an environment that nudges fellows toward rigor and clarity
Connect fellows with resources, literature, and opportunities relevant to AI x biosecurity during and after the fellowship
Communicate with fellows' mentors to define clear research objectives and support fellows' research progression
Contribute to fellow selection by reviewing and interviewing candidates to ensure the cohort has strong epistemics and a relevant background
Contribute to the design of reading groups, workshops, and supplementary programming for the AIxBio cohort
Support special projects aligned with your strengths, such as applicant selection, evaluation frameworks, or mentor onboarding
Meet weekly with program leadership to enhance feedback loops and continuously improve the program
Stay current on developments at the intersection of AI and biosecurity relevant to your fellows' work
Prepare periodic briefs on recent developments in the field for fellows
We expect strong candidates to demonstrate most of the qualities listed below.
Experience supporting complex intellectual work. You have helped others execute complex analytical or research projects — through teaching, managing technical teams, conducting your own research, consulting, or coordinating academic programs.
You understand the research process from the inside. You've done substantial analytical work yourself, whether in academia, policy analysis, consulting, or industry research. You can recognize when a research plan is solid vs. hand-wavy, identify blockers in someone's thinking, and suggest concrete next steps.
Familiarity with AI x biosecurity. You have meaningful exposure to biosecurity, AI safety, or the intersection of the two — through research, policy work, or adjacent professional experience. You don't need to be a specialist, but you need enough context to engage substantively with the research your fellows are doing.
Skilled at developmental feedback. You know how to give constructive criticism that moves work forward. You can help people strengthen their arguments, tighten their methodology, and present their findings clearly — even when the specific topic is outside your core expertise.
Excellent communicator. You explain complex concepts clearly and give constructive feedback effectively. You check in proactively when you're unsure about something or notice a potential problem.
Interpersonally excellent. You have genuine empathy, active listening skills, and a servant leadership approach. You enjoy helping others succeed and take pride in their accomplishments.
Mission-motivated. You are strongly aligned with CBAI's mission and care about the biosecurity risks posed by transformative AI systems. You want to contribute meaningfully to this space and are passionate about building the field.
Organized and conscientious. You keep complex projects on track, follow through reliably, and maintain clear communication. You're receptive to feedback and continuously improve your approach.
The ideal candidate will have a bachelor's degree or higher in Computer Science, Biology, or a related discipline, combined with research or policy experience demonstrating strong methodological knowledge and a genuine interest in AI x biosecurity.
Nice to Haves
Published research or policy work in biosecurity, AI safety, or adjacent areas
Previous involvement in AI safety, biosecurity, or field-building programs
Experience managing research programs or academic initiatives
However, there is no such thing as a "perfect" candidate. If you are on the fence about applying because you are unsure whether you are qualified, we strongly encourage you to apply.
Why This Role May Not Be the Right Fit
We want to be transparent about what this position entails so you can make an informed decision about whether it's right for you:
You're supporting others' research. Your primary job is helping fellows execute their research projects, not shaping research strategy. Mentors and fellows own the research direction. In some cases, with mentor and fellow approval, you may contribute substantially enough to merit co-authorship, but this is a facilitation role first.
Your success is measured by fellows' accomplishments. When fellows publish papers, secure positions at top labs or policy organizations, or advance in their careers, that's your win. If you're primarily motivated by building your own research reputation, this role won't be satisfying.
Workload varies significantly by fellowship cycle. Active fellowship periods are intense; between cycles, the pace slows considerably. If you strongly prefer steady, predictable workloads, this variability may be challenging.
If this sounds exciting, and you want to spend at least a year becoming excellent at supporting talented researchers working on one of the most important emerging risk areas while helping build the Cambridge biosecurity research ecosystem, this role could be a great fit.
Role Details and Benefits
Team: You'll report to the AIxBio Program Lead.
Salary: $100,000 – $115,000, depending on experience. For exceptional candidates, we are flexible on compensation.
We also provide:
5% 403(b) match contribution
Comprehensive health insurance
Generous PTO policy
Meals provided during weekdays
Employer-paid commuter benefits
Reimbursement for work-related technology and/or home office expenses
U.S. work authorization required (we accept OPT).
Location: This position is primarily based in Cambridge, MA. While we expect you to spend most of your time working in-person from our Harvard Square office (particularly during active fellowship cycles), we can offer some hybrid flexibility between fellowship cohorts for candidates with specific circumstances. In those cases, we can give you access to AI safety co-working spaces in Berkeley and NYC.
Start date: May 2026
Selection Process
We use a multi-stage process to find the right fit:
Application Review: We review applications on a rolling basis. Your application will be reviewed in detail by a CBAI employee.
Initial Phone Screen (15 minutes): A conversation with the team manager to discuss your background and interest in the role, and to answer your initial questions.
Paid Test Task: Strong candidates will receive a paid test task mirroring actual responsibilities, such as providing feedback on a research proposal, designing a program component, or creating an evaluation framework.
Interview: Includes discussion of your test task, a fellowship scenario case study, and a conversation with CBAI team members and potentially a mentor.
Reference Checks: Conducted for top finalists, followed by a final conversation to ensure mutual fit.
Offer: Selected candidates receive an offer and onboarding information.
CBAI is an Equal Opportunity Employer and does not discriminate on the basis of race, religion, color, sex, gender identity or expression, sexual orientation, age, disability, national origin, veteran status, or any other basis covered by appropriate law.
In acknowledgement of the research that suggests that women, gender minorities, and other marginalized groups may be less likely to apply for roles where they don’t meet every criterion, we especially encourage people in these categories to apply.
We may use AI to assist in the initial screening of applications, including to detect whether candidates have used AI models in drafting their application. Decisions are always made by a human on our team.