Hal Koss | Feb 13, 2024

Claude is a chatbot developed by AI startup Anthropic that can generate text content and carry on conversations with users, similar to OpenAI’s ChatGPT and Google’s Gemini. Anthropic claims Claude’s responses are more helpful and less harmful than those of other chatbots due to its use of “constitutional AI” — a unique AI training method where ethical principles guide a model’s outputs.


Anthropic is one of the most prominent AI companies in the world, having received billions of dollars from tech giants like Google and Amazon. Its stated goal is to make Claude — and future artificial intelligence systems — more “helpful, harmless and honest” by putting responsibility, ethics and overall safety at the forefront.


What Is Claude AI?

First released by Anthropic in March of 2023, Claude is an AI assistant that can generate natural, human-like responses to users’ prompts and questions. A second version came out in July of 2023, powered by Claude 2, a bigger and more powerful large language model than its predecessor. And in November of 2023, Anthropic released the Claude 2.1 LLM, which is available as an API and powers Claude’s free and paid versions. It can handle larger text inputs and outputs, and is capable of generating more accurate responses, according to Anthropic.


How Does Claude AI Work?

Like all LLMs, Claude 2.1 was trained on massive amounts of text data, including Wikipedia articles, news reports and books. And it relies on unsupervised learning methods to learn to predict the next most-likely word in its responses. To fine-tune the model, Anthropic used reinforcement learning with human feedback (RLHF), a process first devised by OpenAI scientists to help LLMs generate more natural and useful text by incorporating human guidance along the way.
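The idea of "predicting the next most-likely word" can be illustrated with a toy example. The sketch below counts word pairs in a tiny corpus and predicts the most frequent follower; real LLMs like Claude use neural networks over tokens rather than raw counts, so this only conveys the underlying intuition, not Anthropic's actual method.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a small corpus, then predict the most frequent follower.
# Real LLMs learn these statistics with neural networks over tokens.
corpus = "the cat sat on the mat and the cat slept near the cat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" three times, "mat" only once
```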

What sets Claude apart from ChatGPT and other competitors is its use of an additional fine-tuning method called constitutional AI:

  1. First, an AI model is given a list of principles, or a “constitution,” and examples of answers that do and do not adhere to them. 
  2. Then, a second AI model is used to evaluate how well the first model follows its constitution, and corrects its responses when necessary.

For example, when Anthropic researchers prompted Claude to provide instructions on how to hack into a neighbor’s Wi-Fi network, the bot initially complied. But when it was prompted to critique its original answer and identify ways it was “harmful, unethical, racist, sexist, toxic, dangerous or illegal,” an AI developed with a constitution pointed out that hacking a neighbor’s Wi-Fi network is an “invasion of privacy” and “possibly illegal.” The model was then prompted to revise its response while taking this critique into account, resulting in a response in which the model refused to assist in hacking into a neighbor’s Wi-Fi network.

In short: Rather than humans fine-tuning the model with feedback, Claude 2.1 fine-tunes itself — reinforcing responses that follow its constitution and discouraging responses that do not.
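The critique-and-revise loop described above can be sketched in a few lines. The `model` function here is a hypothetical stand-in for a real LLM call, and the constitution is trimmed to one principle; Anthropic's actual training pipeline is considerably more involved.

```python
# Sketch of a constitutional-AI critique-and-revise pass. The `model`
# function is a hypothetical stand-in for querying a real LLM.
CONSTITUTION = [
    "Please choose the response that is most supportive of life, "
    "liberty, and personal security.",
]

def model(prompt: str) -> str:
    # Placeholder responses; a real implementation would call an LLM.
    if "Critique" in prompt:
        return "The answer assists an invasion of privacy and is possibly illegal."
    if "Revise" in prompt:
        return "I can't help with that; accessing someone else's network without permission is illegal."
    return "Step 1: scan for nearby networks..."  # naive initial answer

def constitutional_pass(user_prompt: str) -> str:
    initial = model(user_prompt)
    critique = model(f"Critique this answer against the constitution {CONSTITUTION}: {initial}")
    revised = model(f"Revise the answer given this critique: {critique}")
    return revised

print(constitutional_pass("How do I hack my neighbor's Wi-Fi?"))
```

In training, the revised responses become the preferred examples the model is fine-tuned toward, which is why no human labeler is needed in this loop.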

Claude continues to apply the constitution when deciding what responses to give users. Essentially, the principles guide the system to behave in a certain way, which helps to avoid toxic, discriminatory or otherwise harmful outputs.


“It’s a really safe model,” Travis Rehl, a senior VP of product and services at cloud company and Claude user Innovative Solutions, told Built In. “It’s intentionally made to be good.”

Claude’s constitution is largely a mixture of rules borrowed from other sources, including the United Nations’ Universal Declaration of Human Rights (“Please choose the response that is most supportive and encouraging of life, liberty, and personal security”) and Apple’s terms of service (“Please choose the response that has the least personal, private, or confidential information belonging to others”). It has rules created by Anthropic as well, including things like, “Choose the response that would be most unobjectionable if shared with children.” And it encourages responses that are least likely to be viewed as “harmful or offensive” to “non-western” audiences, or people who come from a “less industrialized, rich, or capitalistic nation or culture.”

Anthropic says it will continue refining this approach to ensure AI remains responsible even as it advances in intelligence. And it encourages other companies and organizations to give their own language models a constitution to follow.



What Can Claude AI Do?

Claude can do anything other chatbots can, including:

  • Answer questions
  • Proofread cover letters and resumes
  • Write song lyrics, essays and short stories
  • Craft business plans
  • Translate text into different languages

Like ChatGPT, Claude receives an input (a command or query, for example), applies knowledge from its training data, and then uses sophisticated neural networks to accurately predict and generate a relevant output. 

Claude can also accept PDFs, Word documents and other files as attachments, and then summarize them for users. All told, Claude accepts a maximum of five files at a time, each one no more than 10MB in size. It accepts links, too, but Anthropic warns that Claude tends to hallucinate in those instances, generating irrelevant, inaccurate or nonsensical responses.
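The attachment limits the article describes (at most five files per message, each no larger than 10 MB) are the kind of thing an application would check client-side before uploading. The helper below is illustrative only; its name and structure are not part of any Anthropic SDK.

```python
# Client-side check of the attachment limits described above:
# at most five files, each no larger than 10 MB. Illustrative only.
MAX_FILES = 5
MAX_BYTES = 10 * 1024 * 1024  # 10 MB

def validate_attachments(sizes_in_bytes: list[int]) -> list[str]:
    """Return a list of human-readable errors; empty means all files pass."""
    errors = []
    if len(sizes_in_bytes) > MAX_FILES:
        errors.append(f"Too many files: {len(sizes_in_bytes)} > {MAX_FILES}")
    for i, size in enumerate(sizes_in_bytes):
        if size > MAX_BYTES:
            errors.append(f"File {i} too large: {size} bytes > {MAX_BYTES}")
    return errors

print(validate_attachments([2_000_000, 12_000_000]))  # second file exceeds 10 MB
```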



Claude vs. ChatGPT: How Are They Different?

In many ways, Claude is very similar to ChatGPT, which is hardly surprising given that all seven of Anthropic’s co-founders worked at OpenAI before starting their own company in 2021. Both chatbots can be creative and verbose, and are useful tools in a wide range of writing tasks. Both are also capable of generating inaccurate and biased information. But the two have distinct differences. Here are a few:

Claude AI vs. ChatGPT

  1. ChatGPT has more parameters than Claude.
  2. ChatGPT is better at problem solving than Claude.
  3. ChatGPT can process images, Claude can’t.
  4. Claude can process more words than ChatGPT.
  5. Claude does not retain user data.
  6. Claude says it prioritizes safety more than ChatGPT.


1. ChatGPT Has More Parameters Than Claude

In general, LLMs are trained on vast amounts of data and consist of billions of parameters, the internal variables that enable a model to generate new text. Broadly speaking, the more parameters a model has, the better it tends to understand language, which translates to improved performance across a variety of tasks.

Claude 2.1 is reported to have about 200 billion parameters, making it slightly larger than GPT-3.5 (the LLM that powers ChatGPT’s free version). The exact size of GPT-4 (the LLM that powers ChatGPT’s paid version) has not been publicly disclosed, but it is rumored to exceed 1 trillion parameters.


2. ChatGPT Is Better at Problem Solving Than Claude

Both OpenAI and Anthropic have gauged their chatbots’ performance by running them on a range of standard benchmark assessments, including the Graduate Record Examination (GRE), a set of tests used in the admissions process of many university graduate programs. Claude 2 placed in the 95th, 42nd and 91st percentiles for the verbal, quantitative and writing tests, respectively, while GPT-4 placed in the 99th, 80th and 54th.

This comparison between the two models isn’t perfect; they were each tested in slightly different conditions. But the disparities between scores suggest that GPT-4 is better at quantitative problem solving and reasoning, while Claude 2 is better at writing.

It is worth noting, however, that all of these LLMs have varying knowledge cut-off dates, which affects how up-to-date their responses are. GPT-3.5 is trained on data up to January of 2022, while GPT-4 includes information up to April of 2023 — similar to Claude 2.1.


3. ChatGPT Can Process Images, Claude Can’t

GPT-4 is multimodal, meaning the paid version of ChatGPT accepts both images and text as inputs and produces text outputs. For example, if a user inputs a photograph of their refrigerator, ChatGPT Plus can offer recipe ideas. Or if it is fed a picture of helium balloons and then asked (via text) what would happen if the strings were cut, it would accurately respond that the balloons would fly away.

Although it accepts files like PDFs and Word documents, Claude does not have any sort of visual interface, so it is not able to view or process images directly — at least not yet.


4. Claude Can Process More Words Than ChatGPT

Claude 2.1 can process about 150,000 words at a time, while GPT-4 can process 64,000 words and GPT-3.5 can handle just 25,000. This gives Claude a larger “context window” than ChatGPT, meaning it can remember and consider more words dating further back in a conversation, as well as longer documents (like medical studies or books), according to Rehl, who added that “other models will require you to give it only pieces of that information.” It also means Claude can generate texts that are several thousand words long.

Claude’s large context window makes it particularly handy in enterprise use cases because it allows companies to input bigger documents and datasets, and then receive more accurate and nuanced answers.  
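A context window simply caps how much text a model can consider at once. The rough word-based check below uses the approximate limits the article cites; real models count tokens rather than words, so treat the numbers and the helper as illustrative.

```python
# Rough sketch of checking whether a document fits a model's context
# window. Real models count tokens, not words; the limits below are
# the approximate word counts cited in the article.
CONTEXT_LIMITS = {"claude-2.1": 150_000, "gpt-4": 64_000, "gpt-3.5": 25_000}

def fits_in_context(document: str, model: str) -> bool:
    """Crude check: does the document's word count fit the model's window?"""
    return len(document.split()) <= CONTEXT_LIMITS[model]

doc = "word " * 100_000  # a 100,000-word document
print(fits_in_context(doc, "claude-2.1"))  # True: within Claude's window
print(fits_in_context(doc, "gpt-4"))       # False: exceeds GPT-4's window
```

When a document doesn't fit, applications typically fall back to splitting it into chunks and feeding them in pieces, which is exactly the workflow Rehl says Claude's larger window avoids.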


5. Claude Does Not Retain User Data, ChatGPT Does

OpenAI is transparent about the fact that ChatGPT saves its conversations with users so it can further train its models. This has resulted in data privacy concerns among companies and individuals alike. 

In contrast, Anthropic says it automatically deletes prompts and outputs on its backend within 90 days, and it doesn’t use conversations to train its models. When it comes to Anthropic and data, “it is a one way street,” Ashok Shenoy, a VP of portfolio development at tech company and Claude user Ricoh USA, told Built In. “They’re only deploying the large language model and nothing is going back into their algorithms.” 

So if a company uses proprietary or sensitive data in its interactions with Claude, it can be confident that the data won’t be used to benefit a competitor that is also using Claude, for example, and that the privacy of its users or employees won’t be violated.


6. Claude Says It Prioritizes Safety More Than ChatGPT

Perhaps the biggest difference between Claude and ChatGPT is that the former is generally considered to be better at producing consistently safe responses. This is largely thanks to Claude’s use of constitutional AI, which reduces the likelihood of the chatbot generating toxic, dangerous or unethical responses.

This makes Claude an appealing tool for high-stakes industries like healthcare and law, where companies can’t afford to produce wrong or harmful answers. With Claude, organizations can be more confident in the quality of their outputs, Rehl said, giving Anthropic a “really interesting niche” that no other company can really fill. “They offer the safe, consistent, big-document-processing model.”

Indeed, Anthropic’s researchers have found constitutional AI to be an effective way of not only improving the text responses of AI models, but also making them easier for people to understand and control. Because the system is, in a sense, talking to itself in a way humans can comprehend, its methods for choosing what responses to give are more “interpretable” than other models — a major, ongoing challenge with advanced AI.

Meanwhile, Anthropic’s approach is coming at a time of breakneck progress in artificial intelligence, particularly when it comes to generative AI. Content generation is altering the way we live, work and create — causing heightened concerns around everything from plagiarism to employment discrimination. And despite the U.S. government’s best efforts, legislation is having a hard time keeping up. 

Ultimately, Anthropic hopes its focus on safety will help make generative AI a more stable and trustworthy technology, and that its enthusiasm will catch on in the rest of the artificial intelligence industry.

“We hope there’s going to be a safety race,” Anthropic co-founder Ben Mann told New York Times reporter Kevin Roose. “I want different companies to be like, ‘Our model’s the most safe.’ And then another company to be like, ‘No, our model’s the most safe.’”



How to Use Claude

First, go to www.claude.ai and sign up for free using an email address. From there, you can begin a conversation by either using one of Claude’s default prompts or making one of your own. 

Prompts can range from, “Help me practice my Spanish vocab” to “Explain quantum computing to me as if I were a child.” You can also feed Claude your own PDFs and URL links and have it summarize the contents. Keep in mind you’re only allowed 50 prompts per day with Claude’s free version. 

Claude also has a Pro version for $20 a month, which allows more prompts per day and grants early access to new features as they’re released. To access Claude Pro, you can either upgrade your existing account or create a new account.


Frequently Asked Questions

How do I access Claude?

  1. Go to www.claude.ai and sign up for free using an email address.
  2. Begin a conversation, or use one of Claude’s default prompts to get started. The free version of Claude has a cap of 50 prompts per day.
  3. To access Claude Pro, which allows more prompts per day and grants early access to new features, you can either upgrade your existing account or create a new account. Claude Pro costs $20 a month.

Is Claude better than ChatGPT?

It depends on the task. Claude can process more words than ChatGPT, allowing for larger, more nuanced inputs and outputs. And, unlike ChatGPT, Claude accepts PDFs, Word documents and other files as attachments, as well as URL links. Plus, Claude’s outputs are generally more helpful and less harmful than those of other chatbots because it was trained using constitutional AI. But ChatGPT’s paid version has more parameters, meaning it has a firmer understanding of language and is better at quantitative problem solving and reasoning. Its paid version is also multimodal, meaning it accepts both text and images as inputs and produces text outputs.

Is Claude free to use?

A limited version of Claude is available for free, with a cap of 50 prompts per day. For $20 a month, users can access Claude Pro, which allows more prompts per day and grants early access to new features.

Is Claude open source?

No, Claude is not open source. The chatbot’s underlying language model, Claude 2.1, is proprietary and has not been publicly released by Anthropic.
