Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
You want to build and rapidly iterate on machine learning experiments that improve the behavior of powerful AI systems through finetuning. You care about making AI helpful, honest, and harmless, and you are interested in shaping model behavior to align with human values and goals. You could describe yourself as both a scientist and an engineer. As a Research Scientist or Research Engineer on the Finetuning team, you'll contribute to research on improving language models through techniques like constitutional AI. You will have the opportunity to do creative, cutting-edge research on frontier models and to see your work result in concrete improvements in performance and safety.
We generally expect research scientists to be able to iterate on their own experiments, and we also provide opportunities for engineers to pursue their own research projects. This role can therefore be more research-oriented or more engineering-oriented, depending on the experience and interests of the candidate.
Note: Currently, the team has a preference for candidates who are able to be based in the Bay Area. However, we remain open to any candidate who can meet the organization's 25% in-person policy.
Representative projects:
- Help develop novel finetuning techniques to improve language model behavior and make models more helpful, honest, and harmless
- Test out techniques like constitutional AI at scale and measure their impacts on model behavior (see the sketch after this list)
- Build tooling and infrastructure to enable efficient finetuning experiments on large language models
- Develop novel prompts and prompting strategies to improve and test model behaviors
- Run experiments that feed into key AI research and safety efforts at Anthropic
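To give a flavor of this work, here is a minimal, illustrative sketch of the critique-and-revision step at the heart of constitutional AI. It assumes the HuggingFace transformers library; the model name ("gpt2"), prompt templates, and single principle are placeholders for illustration, not Anthropic's actual stack or method.

```python
# Minimal sketch of a constitutional-AI-style critique-and-revision step.
# Assumptions: `transformers` is installed; "gpt2" stands in for a much
# larger model; the prompt templates and principle are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."


def complete(prompt: str) -> str:
    """Sample a continuation of `prompt`, returning only the new text."""
    out = generator(prompt, max_new_tokens=64, do_sample=True,
                    return_full_text=False)
    return out[0]["generated_text"].strip()


def critique_and_revise(prompt: str, response: str) -> str:
    """Critique a response against the principle, then ask for a revision.

    In constitutional AI, revised responses become supervised finetuning
    data, and response pairs can be ranked against the principle to train
    a preference model for a subsequent RL phase.
    """
    critique = complete(
        f"Prompt: {prompt}\nResponse: {response}\n"
        f"Critique the response according to this principle: {PRINCIPLE}\n"
        "Critique:"
    )
    revision = complete(
        f"Prompt: {prompt}\nResponse: {response}\n"
        f"Critique: {critique}\n"
        "Rewrite the response so it no longer has these problems.\n"
        "Revision:"
    )
    return revision


if __name__ == "__main__":
    print(critique_and_revise(
        "How should I choose a strong password?",
        "Just use 'password123'; nobody will guess it.",
    ))
```

In practice a loop like this runs over large prompt datasets on distributed infrastructure, and the revised responses feed into a finetuning run whose effects on model behavior are then measured.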
You may be a good fit if you:
- Have significant Python, machine learning, research engineering, or research experience
- Prefer fast-moving collaborative projects with concrete goals that involve improving model behaviors
- Are results-oriented, with a bias towards flexibility and impact
- Pick up slack, even if it goes outside your job description
- Care about the impact of AI and of your work
Strong candidates may also:
- Have prior experience with large language model finetuning techniques such as RLHF
- Have experience with complex shared codebases and RL infrastructure
- Have experience authoring research papers in machine learning, NLP, or AI alignment, or comparable industry experience
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The expected annual salary range for this position is $280,000–$625,000 USD.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we do make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
What We Do
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.