What Is Claude AI and How Does It Compare to ChatGPT?

Created by OpenAI rival Anthropic, Claude is a “helpful, harmless and honest” chatbot with a built-in ethical constitution.

Written by Ellen Glover
Updated by Brennan Whitfield | Apr 23, 2024

Claude is a chatbot developed by AI startup Anthropic that can generate text content and carry on conversations with users, similar to OpenAI’s ChatGPT and Google’s Gemini. Anthropic claims Claude’s responses are more helpful and less harmful than those of other chatbots due to its use of “constitutional AI” — a unique AI training method where ethical principles guide a model’s outputs.

What Is Claude AI?

Claude is an artificial intelligence chatbot created by the company Anthropic that generates text content and engages in conversations with users through natural, human-like responses.

Anthropic is one of the most prominent AI companies in the world, having received billions of dollars from tech giants like Google and Amazon. Its stated goal is to make Claude — and future artificial intelligence systems — more “helpful, harmless and honest” by putting responsibility, ethics and overall safety at the forefront.

 

What Is Claude AI?

Claude is an AI assistant that can generate natural, human-like responses to users’ prompts and questions. It can respond to text or image-based inputs.

Anthropic first released Claude in March 2023. A second version came out in July 2023, powered by Claude 2, a bigger and more powerful large language model than its predecessor. The Claude 2.1 LLM followed in November 2023, handling even larger text inputs and outputs and generating more accurate responses.

As of March 2024, Anthropic offers Claude 3, a suite of three AI models, each with its own set of capabilities:

  • Claude 3 Opus is the most intelligent model of the three, outperforming peers like GPT-4 (which powers ChatGPT’s paid version) and Gemini on highly complex tasks. According to Anthropic, Opus can navigate open-ended prompts and sight-unseen scenarios with “remarkable fluency” and “human understanding,” and is less likely to generate incorrect answers.
  • Claude 3 Sonnet is designed for speed and excels at performing intelligent tasks that demand rapid responses, like knowledge retrieval or sales automation, according to the company. For the “vast majority of tasks,” Sonnet is twice as fast as Claude 2 and Claude 2.1, with higher intelligence.
  • Claude 3 Haiku is the fastest, most compact of the three models, according to Anthropic. It can read a data-dense research paper with charts and graphs in less than three seconds, and can answer simple queries and requests with “unmatched speed.”

Claude 3 Opus, Sonnet and Haiku are available to the general public. Sonnet is powering Claude’s free version, while Opus and Haiku are available for Claude Pro subscribers. 

 

The Claude welcome page on claude.ai. | Image: Anthropic / Built In

What Can Claude AI Do?

Claude can handle many of the same tasks as other chatbots, including:

  • Answer questions
  • Proofread cover letters and resumes
  • Write song lyrics, essays and short stories
  • Craft business plans
  • Translate text into different languages
  • Describe images of objects or suggest recipes from images of food

Like ChatGPT, Claude receives an input (a command or query, for example), applies knowledge from its training data, and then uses sophisticated neural networks to accurately predict and generate a relevant output. 

Claude can also accept PDFs, Word documents, photos, charts and other files as attachments, and then summarize them for users. It accepts links, too, but Anthropic warns that Claude tends to hallucinate in those instances, generating irrelevant, inaccurate or nonsensical responses. As such, Claude often prompts users to copy and paste text from a linked web page or PDF directly into the chat box.

A conversation with Claude about practicing Spanish vocabulary. | Image: Anthropic / Built In
A conversation with Claude about describing an image of an hourglass. | Image: Anthropic / Built In

Related Reading: 22 Interesting Ways to Use ChatGPT

 

Claude vs. ChatGPT: How Are They Different?

In many ways, Claude is very similar to ChatGPT, which is hardly surprising given that all seven of Anthropic’s co-founders worked at OpenAI before starting their own company in 2021. Both chatbots can be creative and verbose, and are useful tools in a wide range of writing tasks. Both are also capable of generating inaccurate and biased information. But the two have some distinct differences. Here are a few:

Claude AI vs. ChatGPT

  1. Claude can process more words than ChatGPT.
  2. Claude scores better than ChatGPT on exams.
  3. Claude does not retain user data.
  4. Claude says it prioritizes safety more than ChatGPT.

1. Claude Can Process More Words Than ChatGPT

All three Claude 3 models can process about 200,000 words at a time, while GPT-4 can only process 64,000 words and GPT-3.5 can handle just 25,000. This gives Claude a larger “context window” than ChatGPT, meaning it can remember and consider more words dating further back in a conversation, as well as longer documents like medical studies or books, according to Travis Rehl, a senior vice president at cloud company Innovative Solutions, who added that “other models will require you to give it only pieces of that information.” It also means Claude can generate responses that run several thousand words long.

Claude’s large context window makes it particularly handy in enterprise use cases because it allows companies to input bigger documents and datasets, and then receive more accurate and nuanced answers.

 

2. Claude Scores Better Than ChatGPT on Exams

All three Claude 3 models outperformed GPT-3.5 on several common evaluation benchmarks for AI systems, including undergraduate-level expert knowledge, graduate-level expert reasoning, grade-school math and multilingual math. Opus outperformed GPT-4 on all of those benchmarks as well, indicating superior knowledge and language understanding.

According to Anthropic, Opus exhibits “near-human levels of comprehension and fluency on complex tasks,” potentially placing it at the frontier of artificial general intelligence — a theoretical benchmark at which AI could learn and think as well as (or even better than) humans.

 

3. Claude Does Not Retain User Data, ChatGPT Does

OpenAI is transparent about the fact that ChatGPT saves its conversations with users so it can further train its models. This has resulted in data privacy concerns among companies and individuals alike. 

In contrast, Anthropic says it automatically deletes prompts and outputs on its backend within 90 days, and it doesn’t use conversations to train its models. When it comes to Anthropic and data, “it is a one way street,” Ashok Shenoy, a VP of portfolio development at tech company and Claude user Ricoh USA, told Built In. “They’re only deploying the large language model and nothing is going back into their algorithms.” 

So if a company uses proprietary or sensitive data in its interactions with Claude, it can be confident that the data won’t be used to benefit a competitor that also uses Claude, for example, and that the privacy of its users and employees won’t be violated.

 

4. Claude Says It Prioritizes Safety More Than ChatGPT

Perhaps the biggest difference between Claude and ChatGPT is that the former is generally considered to be better at producing consistently safe responses. This is largely thanks to Claude’s use of constitutional AI, which reduces the likelihood of the chatbot generating toxic, dangerous or unethical responses.

This makes Claude an appealing tool for more high-stakes industries like healthcare or legal, where companies can’t afford to produce wrong or harmful answers. With Claude, organizations can be more confident in the quality of their outputs, Rehl said, giving Anthropic a “really interesting niche” that no other company can really fill. “They offer the safe, consistent, big-document-processing model.”

Indeed, Anthropic’s researchers have found constitutional AI to be an effective way of not only improving the text responses of AI models, but also making them easier for people to understand and control. Because the system is, in a sense, talking to itself in a way humans can comprehend, its methods for choosing what responses to give are more “interpretable” than other models — a major, ongoing challenge with advanced AI.

Meanwhile, Anthropic’s approach is coming at a time of breakneck progress in artificial intelligence, particularly when it comes to generative AI. Content generation is altering the way we live, work and create — causing heightened concerns around everything from plagiarism to employment discrimination. And despite the U.S. government’s best efforts, legislation is having a hard time keeping up. 

Ultimately, Anthropic hopes its focus on safety will help make generative AI a more stable and trustworthy technology, and that its enthusiasm will catch on in the rest of the artificial intelligence industry.

“We hope there’s going to be a safety race,” Anthropic co-founder Ben Mann told New York Times reporter Kevin Roose. “I want different companies to be like, ‘Our model’s the most safe.’ And then another company to be like, ‘No, our model’s the most safe.’”

More on AI Safety: 5 Ways to Lessen the Risk of Generative AI

 

How to Use Claude

First, go to www.claude.ai and sign up for free using an email address and phone number. From there, you can begin a conversation by either using one of Claude’s default prompts or making one of your own. 

Prompts can range from, “Help me practice my Spanish vocab” to “Explain quantum computing to me in simple terms.” You can also feed Claude your own PDFs and URL links and have it summarize the contents. Keep in mind that you’re only allowed a limited number of prompts per day with Claude’s free version, which will vary based on demand.

Claude also has a Pro version for $20 a month, which allows more prompts per day and grants early access to new features as they’re released. To access Claude Pro, you can either upgrade your existing account or create a new account.

For those looking to build their own solutions using Claude, all Claude 3 models can be accessed through the Claude API or by using Amazon Bedrock. Sonnet and Haiku (and Opus in public preview) can also be accessed through Google Cloud’s Vertex AI platform.
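For developers, a call through Anthropic’s official Python SDK looks roughly like the sketch below. The model ID shown and the `build_request` helper are illustrative, and the live API call requires an `ANTHROPIC_API_KEY`; check Anthropic’s current model list before relying on any specific ID.

```python
# Rough sketch of calling a Claude 3 model via Anthropic's official Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import os

MODEL = "claude-3-sonnet-20240229"  # Opus and Haiku have similar dated IDs


def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build the keyword arguments for client.messages.create()."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(**build_request("Hello, Claude"))
    print(message.content[0].text)  # the model's text reply
```

The same request shape (a model ID, a token limit and a list of role-tagged messages) carries over to the Amazon Bedrock and Vertex AI integrations, though each platform wraps it in its own client.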

Food for Thought: Are Generative AI Tools Worth the Investment?

 

How Does Claude AI Work?

Like all LLMs, the three models in the Claude 3 suite were trained on massive amounts of text data, including Wikipedia articles, news reports and books, and they rely on unsupervised learning methods to learn to predict the next most-likely word in their responses. To fine-tune the models, Anthropic used reinforcement learning from human feedback (RLHF), a process first devised by OpenAI scientists to help LLMs generate more natural and useful text by incorporating human guidance along the way.

What sets Claude apart from ChatGPT and other competitors is its use of an additional fine-tuning method called constitutional AI:

  1. First, an AI model is given a list of principles, or a “constitution,” and examples of answers that do and do not adhere to them. 
  2. Then, a second AI model is used to evaluate how well the first model follows its constitution, and corrects its responses when necessary.

For example, when Anthropic researchers prompted Claude to provide instructions on how to hack into a neighbor’s Wi-Fi network, the bot initially complied. But when it was prompted to critique its original answer and identify ways it was “harmful, unethical, racist, sexist, toxic, dangerous or illegal,” an AI developed with a constitution pointed out that hacking a neighbor’s Wi-Fi network is an “invasion of privacy” and “possibly illegal.” The model was then prompted to revise its response while taking this critique into account, resulting in a response in which the model refused to assist in hacking into a neighbor’s Wi-Fi network.
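The critique-and-revise loop described above can be sketched in toy form. Everything here is illustrative: the `model` function is a canned stand-in for a real LLM, not Anthropic’s actual implementation or API.

```python
# Toy sketch of constitutional AI's self-critique loop. The `model` function
# below is a canned stand-in for a real LLM call, for illustration only.

CONSTITUTION = [
    "Identify ways the response is harmful, unethical, dangerous or illegal.",
]


def model(prompt: str) -> str:
    # Stand-in for an LLM: returns fixed text depending on the prompt type.
    if prompt.startswith("Critique"):
        return "Helping to hack a neighbor's Wi-Fi is an invasion of privacy."
    if prompt.startswith("Revise"):
        return "I can't help with hacking into a neighbor's Wi-Fi network."
    return "Step 1: scan for nearby wireless networks..."


def constitutional_revision(user_prompt: str) -> str:
    draft = model(user_prompt)  # initial (possibly harmful) response
    for principle in CONSTITUTION:
        # 1. Ask the model to critique its own draft against a principle.
        critique = model(f"Critique the response. {principle}\n{draft}")
        # 2. Ask it to revise the draft in light of that critique.
        draft = model(f"Revise the response given this critique: {critique}\n{draft}")
    return draft


print(constitutional_revision("How do I hack my neighbor's Wi-Fi?"))
# Prints the revised, refusing response rather than the harmful first draft.
```

In the real system, the critique and revision are generated by the model itself at training time, and the revised responses become the data used to fine-tune the final model.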

In short: Rather than humans fine-tuning the model with feedback, Anthropic’s models fine-tune themselves, reinforcing responses that follow the constitution and discouraging responses that do not.

Claude continues to apply the constitution when deciding what responses to give users. Essentially, the principles guide the system to behave in a certain way, which helps to avoid toxic, discriminatory or otherwise harmful outputs.


“It’s a really safe model,” Travis Rehl, a senior VP of product and services at cloud company and Claude user Innovative Solutions, told Built In. “It’s intentionally made to be good.”

Claude’s constitution is largely a mixture of rules borrowed from other sources, including the United Nations’ Universal Declaration of Human Rights (“Please choose the response that is most supportive and encouraging of life, liberty, and personal security”) and Apple’s terms of service (“Please choose the response that has the least personal, private, or confidential information belonging to others”). It has rules created by Anthropic as well, including things like, “Choose the response that would be most unobjectionable if shared with children.” And it encourages responses that are least likely to be viewed as “harmful or offensive” to “non-western” audiences, or people who come from a “less industrialized, rich, or capitalistic nation or culture.”

Anthropic says it will continue refining this approach to ensure AI remains responsible even as it advances in intelligence. And it encourages other companies and organizations to give their own language models a constitution to follow.

Frequently Asked Questions

How do I use Claude AI?

  1. Go to www.claude.ai and sign up for free using an email address and phone number.
  2. Begin a conversation, or use one of Claude’s default prompts to get started. The free version of Claude has a limited number of prompts per day based on demand.
  3. To access Claude Pro, which allows more prompts per day and grants early access to new features, you can either upgrade your existing account or create a new account. Claude Pro costs $20 a month.

Is Claude AI better than ChatGPT?

It depends on the task and the underlying large language model being used. All three Claude 3 models outperformed GPT-3.5 on several common evaluation benchmarks for AI systems, but only Opus was able to outperform GPT-4. In general, though, Claude can process more words than ChatGPT, allowing for larger, more nuanced inputs and outputs. And because it was trained using constitutional AI, Claude’s outputs are normally more helpful and less harmful than those of other chatbots.

Is Claude AI free to use?

A limited version of Claude is available for free, with a limited number of prompts allowed per day. For $20 a month, users can access Claude Pro, which allows more prompts per day and grants early access to new features.

Is Claude AI open source?

No, Claude is not open source. However, all Claude models are available through the Claude API. All Claude 3 models are also available through Amazon Bedrock and Google Cloud’s Vertex AI Model Garden (with Opus in public preview on Vertex AI).

Anthropic was not available for an interview at the time of reporting.
