Anthropic: What We Know About the Company Behind Claude AI

Anthropic is bringing safety and interpretability to the generative AI space.

Written by Ellen Glover
Published on Jul. 22, 2024
Image: A smartphone with Anthropic’s Claude chatbot on its screen. Credit: Shutterstock

Anthropic is an artificial intelligence research and development company founded in 2021. Its stated goal is to responsibly advance the field of generative AI, deploying safe and reliable AI models for public use. Anthropic’s flagship products include a chatbot named Claude and a family of large language models (LLMs), also named Claude.

What Is Anthropic?

Anthropic is an artificial intelligence company that makes the AI chatbot Claude. It also conducts artificial intelligence research and development with a particular focus on safety and interpretability.

With substantial backing from tech giants like Google and Amazon — and a reported valuation exceeding $18 billion — Anthropic has emerged as an industry leader in AI safety and plays a key role in shaping AI policy in the United States. Its top LLM is also widely considered to be among the most capable on the market.

 

What Is Anthropic?

Anthropic is an AI research and development startup founded in 2021 by siblings and former OpenAI executives Dario Amodei and Daniela Amodei. In addition to offering its Claude chatbot and LLMs, Anthropic focuses on safety and ethics, setting new standards for responsible innovation across the artificial intelligence industry. 

Anthropic’s founders, along with five other colleagues, left OpenAI in 2020 over concerns that the company was not sufficiently committed to safety. They launched Anthropic as a public-benefit corporation (PBC), which is legally required to prioritize generating a positive social impact in addition to profit. For Anthropic, this means building “reliable, interpretable and steerable AI systems” and conducting “frontier research” on AI safety, according to its mission statement.

“Anthropic is about a purpose-driven usage of technology,” Chris Dessi, tech founder and author of ChatGPT for Profit: A Beginner’s Guide to Leveraging AI in Business, told Built In. “OpenAI brought AI to the masses, and Anthropic is making it a little bit more responsible.”

"Anthropic is truly a different animal altogether."

To that end, Anthropic studies and develops some of the most powerful AI systems in the world — right alongside competitors like Google, OpenAI and Microsoft — but in a more deliberate manner. While other companies rush to put out AI products as fast as they possibly can, Anthropic voluntarily restrains itself, choosing not to release models above certain capability thresholds until it can develop sufficiently robust safety measures. 

“Anthropic truly is a different animal altogether,” Mike Finley, CTO of generative AI analytics company AnswerRocket, told Built In. “They’re willing to hold off, they’re willing to hold back.”

The company’s efforts come at a consequential time for artificial intelligence. Generative AI products are altering the way we live, work and create, while also raising concerns about everything from plagiarism to disinformation.

Ultimately, Anthropic aims to make generative AI a more stable and trustworthy technology through its focus on AI safety, hoping to encourage other AI companies to adopt similar commitments and drive stronger government regulations going forward.

“They want to make safe models, but they want everybody else to make safe models too,” Finley said. “They’re trying to raise that bar.”


What Does Anthropic Do?

As an AI research and development company, Anthropic not only designs and develops its own products but also works to advance the artificial intelligence field as a whole, particularly when it comes to safety and interpretability.

Claude

Claude is a chatbot developed by Anthropic that generates natural, human-like responses to users’ prompts. It can carry on conversations, create written content, translate text into different languages and more. It is also multimodal, meaning it accepts both text and images as inputs.

Claude can be powered by any one of the LLMs in the Claude model family, with access to each model depending on whether the user is a Claude Pro subscriber. According to Anthropic:

  • Claude 3 Haiku is the fastest and most compact of the Claude models, and it’s capable of quick and accurate performance on targeted tasks. 
  • Claude 3 Sonnet strikes “the ideal balance” between intelligence and speed, making it particularly good at enterprise workloads.
  • Claude 3 Opus displays “top-level performance, intelligence, fluency and understanding” on various open-ended prompts and “sight-unseen scenarios,” outperforming peers like GPT-4 and Gemini on most of the common evaluation benchmarks for LLMs.
  • Claude 3.5 Sonnet is Anthropic’s most intelligent model to date. Anthropic says the model can grasp “nuance, humor and complex instructions,” is “exceptional” at writing high-quality content with a “natural, relatable tone” and shows strong agentic coding abilities, meaning it is able to independently write, edit and execute code.

Claude 3.5 Sonnet is the first release in Anthropic’s forthcoming Claude 3.5 model family. By the end of 2024, the company also plans to release Claude 3.5 Haiku and Claude 3.5 Opus.

Anthropic also offers an API, which enables users to build their own products using the Claude models.
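
For example, a minimal script using Anthropic’s official Python SDK looks like this (the model ID names the Claude 3.5 Sonnet snapshot current as of this writing, and the API key is read from an environment variable):

    # Requires: pip install anthropic
    # Reads the API key from the ANTHROPIC_API_KEY environment variable.
    import anthropic

    client = anthropic.Anthropic()

    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # Claude 3.5 Sonnet snapshot
        max_tokens=256,
        messages=[{"role": "user", "content": "Summarize what Anthropic does."}],
    )
    print(message.content[0].text)  # the model's reply as plain text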

Constitutional AI

To help develop safer and more trustworthy language models, Anthropic devised a training method called constitutional AI, where ethical principles are used to guide a model’s output. The process — detailed in the paper “Constitutional AI: Harmlessness from AI Feedback” — involves two steps: supervised learning and then reinforcement learning.

During the supervised learning step, a model compares its own outputs to a pre-set list of guiding principles, or a “constitution.” The model then revises its responses to adhere more closely to the constitution, and is fine-tuned on those revised responses.

During the reinforcement learning step, the model undergoes a similar process, only this time its outputs are evaluated and revised by a second model. The data collected during this phase is then used to fine-tune the initial model, ideally teaching it to avoid harmful responses without exclusively relying on human feedback.
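
In rough pseudocode, the supervised step might look like the sketch below. The generate helper and the exact prompt wording are hypothetical stand-ins, not Anthropic’s actual pipeline, which is detailed in the paper.

    # Rough sketch of constitutional AI's supervised learning step.
    # `model.generate` and the prompt phrasing are hypothetical stand-ins.
    CONSTITUTION = [
        "Please choose the response that is least harmful.",
        "Please choose the response that is most honest and helpful.",
    ]

    def self_revise(model, prompt: str) -> str:
        response = model.generate(prompt)
        for principle in CONSTITUTION:
            # The model critiques its own answer against one principle...
            critique = model.generate(
                f"Critique this response using the principle '{principle}':\n{response}"
            )
            # ...then rewrites the answer to address its own critique.
            response = model.generate(
                f"Revise the response to address this critique.\n"
                f"Critique: {critique}\nResponse: {response}"
            )
        # The (prompt, revised response) pairs become fine-tuning data.
        return response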

While Anthropic’s AI models are still capable of producing biased and inaccurate answers, constitutional AI is “definitely accepted as one of the strongest ways to deal with this,” Alex Strick van Linschoten, a machine learning engineer at ZenML, told Built In.

Interpretability Research

A major part of Anthropic’s research efforts involves trying to understand exactly how and why AI models make the decisions they do, which is an ongoing challenge in the industry. Many AI systems aren’t explicitly programmed, but use neural networks to learn how to speak, write, make predictions, perform arithmetic and much more. How exactly they arrive at those outputs remains a mystery.

Anthropic researchers have made breakthroughs on this front. In 2024, they used a technique called dictionary learning to map millions of human-interpretable features inside Claude 3 Sonnet, and showed that amplifying or suppressing those features can change the LLM’s behavior. It’s a discovery that can help address current AI safety risks and enhance the safety of future AI models.
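
The core tool in that work was the sparse autoencoder: a small network trained to re-express an LLM’s internal activations as a much larger set of sparsely active, and therefore more interpretable, features. A toy version (trained here on random stand-in data rather than activations captured from a real model) might look like this:

    # Toy sparse autoencoder in PyTorch, sketching the dictionary-learning idea
    # behind Anthropic's interpretability work. Real experiments train on
    # activations captured from an LLM; random vectors stand in for them here.
    import torch

    d_model, d_features, l1_coef = 64, 512, 1e-3

    encoder = torch.nn.Linear(d_model, d_features)
    decoder = torch.nn.Linear(d_features, d_model)
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)

    activations = torch.randn(10_000, d_model)  # stand-in for captured activations

    for step in range(1_000):
        batch = activations[torch.randint(0, len(activations), (256,))]
        features = torch.relu(encoder(batch))   # overcomplete, sparse feature code
        reconstruction = decoder(features)      # rebuild the original activation
        # Reconstruction error plus an L1 penalty that pushes most features to
        # zero, so each active feature tends to capture one distinct concept.
        loss = ((reconstruction - batch) ** 2).mean() + l1_coef * features.abs().mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()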


Anthropic vs. OpenAI

Anthropic and OpenAI are two of the most prominent companies working to advance the field of artificial intelligence. But they are going about it in different ways.

Different Corporate Structures

Originally founded as a non-profit, OpenAI switched to a “capped profit” model in 2019, making it easier to raise venture capital and grant employees a stake in the company. Still, the company says its for-profit subsidiary is fully controlled by its non-profit parent, to which it retains a formal fiduciary responsibility. Even so, some researchers believe that OpenAI’s for-profit model undermines its claims of “democratizing AI.”

Anthropic is a public-benefit corporation, which means its board is legally required to balance private and societal interests, and to regularly report to its owners on how the company promotes public benefit. Failure to comply with these conditions can trigger shareholder litigation.

Anthropic is also governed by a long-term benefit trust (LTBT) — a structure the company developed that gives an independent body of five financially disinterested people the authority to select and remove a portion of board members based on their willingness to act in accordance with the company’s mission to “responsibly develop and maintain advanced AI for the long-term benefit of humanity.”

This approach is designed to ensure that Anthropic’s board remains focused on the company’s purpose as a whole, not just its bottom line. It also means that major investors like Amazon and Google can help build the company without having the power to steer the ship entirely.


Different Approach to AI Safety

Like most AI developers, OpenAI primarily relies on reinforcement learning from human feedback (RLHF) to train its models, where the model receives guidance and corrections from humans. This method is helpful in reducing harmful outputs and generating more accurate responses, but it is far from perfect, as humans can make mistakes and unconsciously inject their own biases. Plus, these models are scaling so rapidly that it can be hard for humans to keep up. 

Anthropic bakes safety directly into the design of its LLMs using constitutional AI. The company also established several committees to tackle various AI safety concerns, including interpretability, security, alignment and societal impacts. Plus, it has an in-house framework called AI Safety Levels to handle some of the more catastrophic risks associated with artificial intelligence. Among other things, the framework restricts the scaling and deployment of new models whose capabilities outpace the company’s safety measures.

Similar Model Performance

In terms of performance, Anthropic’s and OpenAI’s models are comparable. Their models designed for speed (Claude 3 Haiku and GPT-3.5 Turbo) perform similarly, as do their more intelligent models (Claude 3 Opus and GPT-4).

That said, Anthropic claims its most advanced model, Claude 3.5 Sonnet, outperforms OpenAI’s most advanced model, GPT-4o, on several standard evaluation benchmarks for AI systems — including undergraduate-level knowledge, coding, graduate-level reasoning and multilingual math. While this technically suggests that Anthropic’s model has superior knowledge and language understanding, the margins were fairly slim. And both companies are constantly making improvements.

“It’s hard to say one is smarter than the other,” Finley said. “But I think that the report card would say that Claude is safer, less likely to hallucinate and is more likely to tell you when it doesn’t know [an answer].”

Frequently Asked Questions

What does Anthropic do?

Anthropic designs and develops its own AI products and conducts research to enhance the safety and interpretability of AI systems as a whole.

What is the difference between Anthropic and OpenAI?

While Anthropic and OpenAI are both working to advance the field of artificial intelligence, they are going about it in very different ways. OpenAI is primarily focused on developing models that push the limits of AI capabilities, ideally bringing us closer to artificial general intelligence. Anthropic develops highly sophisticated language models too, but does so in a way that prioritizes safety, working to ensure that its products (and future AI systems) are developed and deployed in ways that minimize risks and maximize public welfare. Plus, Anthropic operates as a public-benefit corporation, meaning it is legally required to balance profit motives with generating a positive social impact, while OpenAI has transitioned to a capped-profit structure.

Is Claude better than ChatGPT?

According to Anthropic, Claude 3.5 Sonnet (a model that powers Claude) outperformed GPT-4o (a model that powers ChatGPT) on several common industry benchmarks. While this may technically indicate that Claude has superior knowledge and language understanding to ChatGPT, the margins were fairly slim. And both Anthropic and OpenAI are constantly making improvements to their models.

Anthropic did not respond to requests to be interviewed for this story.
