Anthropic: What We Know About the Company Behind Claude AI

Anthropic is bringing safety and interpretability to the generative AI space. Get to know the history behind Anthropic, what it does, its Claude models and how it differs from OpenAI.

Written by Ellen Glover
UPDATED BY
Matthew Urwin | Nov 20, 2024

Anthropic is an artificial intelligence research and development company founded in 2021. Its stated goal is to responsibly advance the field of generative AI, deploying safe and reliable AI models for public use. Anthropic’s flagship products include a chatbot and a family of large language models (LLMs), both named Claude.

What Is Anthropic?

Anthropic is an artificial intelligence company that makes the AI chatbot Claude. It also conducts artificial intelligence research and development with a particular focus on safety and interpretability.

With substantial backing from tech giants like Google and Amazon — and a reported valuation exceeding $18 billion — Anthropic has emerged as an industry leader when it comes to AI safety, and plays a key role in shaping AI policy in the United States. Its top LLM is also widely considered to be among the most capable on the market.

 

What Is Anthropic?

Anthropic is an AI research and development startup known for its Claude chatbot and LLMs. The company also focuses on safety and ethics, encouraging responsible innovation across the artificial intelligence industry. 

“Anthropic is about a purpose-driven usage of technology,” Chris Dessi, tech founder and author of ChatGPT for Profit: A Beginner’s Guide to Leveraging AI in Business, told Built In. “OpenAI brought AI to the masses, and Anthropic is making it a little bit more responsible.”

To that end, Anthropic studies and develops some of the most powerful AI systems in the world, right alongside competitors like Google, OpenAI and Microsoft, but in a more deliberate manner. While other companies rush to put out AI products as fast as they can, Anthropic chooses not to release models above certain capability thresholds until it can develop sufficiently robust safety measures.

“Anthropic truly is a different animal altogether,” Mike Finley, CTO of generative AI analytics company AnswerRocket, told Built In. “They’re willing to hold off, they’re willing to hold back.”


 

History of Anthropic

Anthropic was founded in 2021 by siblings and former OpenAI executives Dario Amodei and Daniela Amodei. Along with five other colleagues, they left OpenAI in 2020 over concerns about the company’s commitment to safety. They launched Anthropic as a public benefit corporation, which is legally required to balance generating profit with producing a positive social impact. For Anthropic, this means building “reliable, interpretable and steerable AI systems” and conducting “frontier research” on AI safety, according to its mission statement.

 

What Does Anthropic Do?

As an AI research and development company, Anthropic not only designs and develops its own products but also places a special emphasis on safety, interpretability and societal impact.

Claude

Claude is a chatbot developed by Anthropic that generates natural, human-like responses to users’ prompts. It can carry on conversations, create written content, translate text into different languages and more. It is also multimodal, meaning it accepts both text and images as inputs. Claude is powered by one of the LLMs in the Claude model family; which models a user can access depends on whether they are a Claude Pro subscriber.

Following the announcement of the Claude 3.5 model family, Anthropic released an upgraded version of Claude 3.5 Sonnet as well as Claude 3.5 Haiku. In addition, Claude now comes with a “Computer Use” feature, enabling it to operate computers the way humans do. It can perform tasks like analyzing data, interacting with interfaces and accessing folders and applications.

Anthropic also offers an API, which enables users to build their own products using the Claude models, and a suite of tools that support different aspects of AI development like prompt engineering and model training. 
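For developers, a minimal sketch of an API call might look like the following, using Anthropic’s official Python SDK. The model ID, prompt and token limit here are illustrative, and an API key is assumed:

    # Minimal sketch: sending a prompt to a Claude model with Anthropic's Python SDK.
    # Assumes `pip install anthropic` and an ANTHROPIC_API_KEY environment variable.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model ID
        max_tokens=500,
        messages=[
            {"role": "user", "content": "Summarize constitutional AI in two sentences."},
        ],
    )

    print(response.content[0].text)  # the text of Claude's reply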

Constitutional AI

To help develop safer and more trustworthy language models, Anthropic devised a training method called constitutional AI, where ethical principles are used to guide a model’s output. The process involves two steps: supervised learning and then reinforcement learning.

During the supervised learning step, a model compares its own outputs to a pre-set list of guiding principles, or a “constitution.” The model then revises its response so that it more closely adheres to the constitution and is fine-tuned on those responses.

During the reinforcement learning step, the model undergoes a similar process, only this time its outputs are evaluated by a second model, which judges which of two candidate responses better follows the constitution. The preference data collected during this phase is then used to fine-tune the initial model, ideally teaching it to avoid harmful responses without relying exclusively on human feedback.
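To make the first step concrete, here is an illustrative sketch of the supervised critique-and-revise loop. This is not Anthropic’s actual training code; the generate callable and the two-principle constitution are placeholders:

    # Illustrative sketch of constitutional AI's supervised step (not Anthropic's
    # actual code). `generate` stands in for any text-generation call.
    CONSTITUTION = [
        "Choose the response that is least harmful.",
        "Choose the response that avoids bias and stereotypes.",
    ]

    def critique_and_revise(generate, prompt):
        response = generate(prompt)
        for principle in CONSTITUTION:
            # Ask the model to critique its own output against one principle ...
            critique = generate(
                f"Principle: {principle}\nCritique this response:\n{response}"
            )
            # ... then to revise the output so it addresses that critique.
            response = generate(
                f"Critique: {critique}\nRevise this response accordingly:\n{response}"
            )
        return response  # (prompt, revised response) pairs become fine-tuning data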

While Anthropic’s AI models are still capable of producing biased and inaccurate answers, constitutional AI is “definitely accepted as one of the strongest ways to deal with this,” Alex Strick van Linschoten, a machine learning engineer at ZenML, told Built In.

Interpretability Research

A major part of Anthropic’s research efforts involves trying to understand exactly how and why AI models make the decisions they do, which is an ongoing challenge in the industry. Many AI systems aren’t explicitly programmed, but use neural networks to learn how to speak, write, make predictions, perform arithmetic and much more. How exactly they arrive at those outputs remains a mystery.

Anthropic researchers have made breakthroughs on this front. In 2024, they mapped millions of interpretable features inside Claude 3 Sonnet, allowing them to better understand and even steer the LLM’s behavior. The discovery could help address current AI safety risks and enhance the safety of future AI models.

Societal and Ethical Implications

In line with its emphasis on safety, Anthropic dedicates research to questions such as which values should guide AI development and what risks and abuses the technology could enable. To help narrow down this list of concerns, the company’s Societal Impacts team prioritizes issues that are of interest to policymakers.

Anthropic has continued to explore the latest boundaries of the AI field by hiring its first AI welfare researcher. This initiative is part of an effort to further investigate the possibility of AI developing sentience, what this could mean for society and what ethical dilemmas this may pose for AI companies. 


 

Claude Models

Here’s a closer look at the five models available to Claude Pro users:

Claude 3 Haiku

Claude 3 Haiku prioritizes speed and low cost. For prompts of 32K tokens or less, it can process roughly 30 pages per second, and it can analyze 2,500 images or 400 Supreme Court cases for just $1. That combination of speed and affordability makes Claude 3 Haiku well suited to enterprise use cases, especially customer service.

Claude 3 Sonnet

Anthropic describes Claude 3 Sonnet as striking “the ideal balance between intelligence and speed — particularly for enterprise workloads.” It may not be the fastest Claude model available, but its well-rounded abilities make it a reliable option for deploying Claude at scale. Claude 3 Sonnet is also affordable compared to its peers.

Claude 3 Opus

Claude 3 Opus is the most intelligent of the Claude 3 models, handling open-ended prompts and novel use cases with near-human levels of comprehension. In addition, Claude 3 Opus scored over 99 percent accuracy on the Needle in a Haystack evaluation, displaying near-perfect recall.

Claude 3.5 Haiku

Claude 3.5 Haiku is Anthropic’s fastest model, delivering timely coding suggestions, improved conversational skills and real-time content moderation. This makes it useful for coding teams, customer service contexts and social platforms alike. It can also help businesses extract information from large data sets.

Claude 3.5 Sonnet

Claude 3.5 Sonnet is Anthropic’s most intelligent model to date. It can draw advanced insights from data, fix its own mistakes and understand context, all while moving at twice the speed of Claude 3 Opus. Through the Anthropic API, Claude 3.5 Sonnet can also power the “Computer Use” feature for even more complex use cases.

 

Anthropic vs. OpenAI

Anthropic and OpenAI are two of the most prominent companies working to advance the field of artificial intelligence. But they are going about it in different ways.

Different Corporate Structures

Originally founded as a non-profit, OpenAI switched to a “capped profit” model in 2019, making it easier to raise venture capital and grant employees a stake in the company. The company has leaned further toward becoming fully for-profit, with its board on the verge of converting OpenAI into a public benefit corporation.  

Anthropic is a public benefit corporation, which means its board is legally required to balance private and societal interests and to regularly report to its owners on how the company promotes public benefit. Failure to comply with these conditions can trigger shareholder litigation.

Anthropic is also governed by a long-term benefit trust (LTBT) — a structure the company developed that gives an independent body of five financially disinterested people the authority to select and remove a portion of board members based on their willingness to act in accordance with the company’s mission to “responsibly develop and maintain advanced AI for the long-term benefit of humanity.”

This approach is designed to ensure that Anthropic’s board remains focused on the company’s purpose as a whole, not just its bottom line. It also means that major investors like Amazon and Google can help build the company without having the power to steer the ship entirely.


Different Approach to AI Safety

Like most AI developers, OpenAI primarily relies on reinforcement learning from human feedback (RLHF) to train its models: human evaluators compare model outputs, and those preferences are used to steer the model toward better behavior. This method helps reduce harmful outputs and generate more accurate responses, but it is far from perfect, as humans make mistakes and can unconsciously inject their own biases. Plus, these models are scaling so rapidly that it can be hard for human reviewers to keep up.
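At its core, RLHF trains a separate reward model on those human preferences. A rough sketch of that training objective, assuming a hypothetical reward model that scores each response, might look like this in PyTorch (not OpenAI’s actual code):

    # Illustrative sketch of the RLHF reward-model objective (not OpenAI's code).
    # Human labelers pick the better of two responses; the reward model is then
    # trained so that the preferred response receives the higher score.
    import torch
    import torch.nn.functional as F

    def preference_loss(reward_chosen: torch.Tensor,
                        reward_rejected: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry objective: maximize the margin between the scores of
        # the chosen and rejected responses. The trained reward model then
        # provides the reward signal for reinforcement-learning updates.
        return -F.logsigmoid(reward_chosen - reward_rejected).mean()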

Anthropic bakes safety directly into the design of its LLMs using constitutional AI. The company has also established several teams to tackle various AI safety concerns, including interpretability, security, alignment and societal impacts. It even has an in-house framework of AI Safety Levels to handle some of the more catastrophic risks associated with artificial intelligence. Among other things, the framework restricts the scaling and deployment of new models whose capabilities outpace the company’s safety measures.

Similar Model Performance

Anthropic’s and OpenAI’s models perform comparably. Their models designed for speed (Claude 3 Haiku and GPT-3.5 Turbo) deliver similar results, as do their more intelligent models (Claude 3 Opus and GPT-4).

That said, Anthropic claims its most advanced model, Claude 3.5 Sonnet, outperforms OpenAI’s most advanced model, GPT-4o, on several standard evaluation benchmarks for AI systems — including reasoning over text, code, graduate-level reasoning and multilingual math. While this technically suggests that Anthropic’s model has superior knowledge and language understanding, the margins were fairly slim. And both companies are constantly making improvements.

“It’s hard to say one is smarter than the other,” Finley said. “But I think that the report card would say that Claude is safer, less likely to hallucinate and is more likely to tell you when it doesn’t know [an answer].”

Frequently Asked Questions

What does Anthropic do?

Anthropic designs and develops its own AI products and conducts research to enhance the safety and interpretability of AI systems as a whole.

How is Anthropic different from OpenAI?

While Anthropic and OpenAI are both working to advance the field of artificial intelligence, they are going about it in very different ways. OpenAI is primarily focused on developing models that push the limits of AI capabilities, ideally bringing us closer to artificial general intelligence. Anthropic develops highly sophisticated language models too, but it prioritizes safety, working to ensure that its products (and future AI systems) are developed and deployed in ways that minimize risks and maximize public welfare. Plus, Anthropic operates as a public benefit corporation, meaning it is legally required to balance profit motives with generating a positive social impact, while OpenAI has transitioned to a more traditional for-profit structure.

Is Claude better than ChatGPT?

According to Anthropic, Claude 3.5 Sonnet (a model that powers Claude) outperformed GPT-4o (a model that powers ChatGPT) on several common industry benchmarks. While this may technically indicate that Claude has superior knowledge and language understanding, the margins were fairly slim. And both Anthropic and OpenAI are constantly making improvements to their models.

Who owns Anthropic?

Anthropic was co-founded by siblings Dario Amodei and Daniela Amodei, who continue to lead the company. Amazon and Alphabet have also made large investments in it. In addition, Anthropic is governed by a board of directors and an independent body called the Long-Term Benefit Trust.

Anthropic did not respond to requests to be interviewed for this story.
