ChatGPT is a chatbot that uses elements of artificial intelligence to communicate with people in a human-like way, as well as generate unique text. It can answer questions, compose essays, offer advice and much more. At its most basic level, ChatGPT is simply a form of generative AI, meaning it can automatically produce written content. It is also a form of conversational AI, so it is capable of carrying on a fluent, natural dialogue with humans.
“It allows us to talk to AI, and it allows AI to talk back to us,” Jeff Kagan, a tech industry analyst, told Built In. “It’s got the power to do a sort of computer version of thinking.”
AI research lab OpenAI launched ChatGPT in November of 2022, and it was a near-overnight success, reportedly reaching 100 million users in just a couple of months.
What Is ChatGPT?
ChatGPT is a chatbot that uses elements of AI to communicate with users in a natural, human-like way, as well as generate unique content. It can answer questions, compose essays, offer advice and much more.
How Does ChatGPT Work?
ChatGPT is powered by a large language model, or LLM: a neural network that predicts word sequences, generating sentences much as a human would write or speak them. The LLM learns to do this by being trained on a large corpus of data, written content from the internet that can include everything from Wikipedia articles to research papers.
In practice, LLMs assign a probability to a certain word or sequence being “valid,” meaning it resembles how people write, which is what the language model learns, wrote Built In expert contributor and data scientist Mór Kapronczay.
The sequences generated don’t have to be language, as they are with ChatGPT. They can also be numerical sequences, or time series. “It just happens to be that language is so ubiquitous that it’s the most useful type of sequence to predict,” Sam Stone, the director of product management, pricing and data products at real estate tech firm Opendoor, told Built In.
The LLM takes a sequence of words a user gives it, such as a half-completed sentence, and fills in the blanks with the most statistically probable word given the surrounding context. (Earlier sequence models often relied on recurrent architectures such as the long short-term memory network, or LSTM; ChatGPT’s model relies on the transformer architecture instead.) This happens iteratively, building from words to sentences, to paragraphs, to pages of text.
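The fill-in-the-blank loop described above can be sketched in a few lines of Python. This toy model simply counts which word follows which in a tiny made-up corpus and repeatedly emits the most probable successor; real LLMs learn these probabilities with neural networks over vast amounts of text, but the iterative word-by-word generation is the same basic idea.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" (not real training data).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely word to follow `word`."""
    return follows[word].most_common(1)[0][0]

def complete(prompt, n_words=4):
    """Iteratively extend a prompt one predicted word at a time,
    the way a language model builds text token by token."""
    words = prompt.split()
    for _ in range(n_words):
        words.append(most_probable_next(words[-1]))
    return " ".join(words)

print(complete("the cat"))
```

Even this crude counter produces locally plausible text; the leap to ChatGPT comes from replacing the counts with a neural network conditioned on far more context than just the previous word.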
How to Use ChatGPT
- Visit ChatGPT’s website at https://chat.openai.com/auth/login.
- If ChatGPT is at capacity, put in your email address to get notified when there is more space.
- You can also register to be a ChatGPT Plus subscriber ($20/month), which, among other things, allows you to access ChatGPT even during peak times when the server is at capacity.
- Before using ChatGPT, you must accept the terms of service.
- Once you’re in ChatGPT, just start writing.
In order to sift through the several terabytes of data it was trained on and transform that into text, ChatGPT uses a technique called the transformer architecture — hence, the “T” in its name. “GPT” is short for generative pre-trained transformer, a type of language model that uses deep learning to generate natural, human-like text based on a given text input.
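The transformer’s core operation, scaled dot-product self-attention, can be illustrated with a toy computation: every word’s vector is updated as a weighted mix of all the words’ vectors, where the weights come from how similar each pair of words is. The two-dimensional vectors below are invented for illustration; real models use learned vectors with hundreds or thousands of dimensions plus separate query, key and value projections.

```python
import math

def softmax(xs):
    """Normalize scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """For each vector, score its similarity to every vector
    (dot product, scaled), softmax the scores into weights, and
    return the weighted average: each output mixes all inputs."""
    d = len(vectors[0])
    out = []
    for q in vectors:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(d)])
    return out

# Three made-up "word" vectors; each output row blends all three.
words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(words)
```

This mixing is what lets a transformer weigh distant context (not just the previous word) when predicting what comes next.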
The language models used in ChatGPT are specifically optimized for dialogue and were trained using an approach called reinforcement learning from human feedback, or RLHF, which incorporates human feedback into the training process so the model can better align its outputs with user intent. “It actually integrates and systematizes humans’ subjective judgment into the model training process,” Stone said. This not only helps the model determine the best output, it also improves the training process itself, enabling the model to answer questions more effectively.
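The full RLHF pipeline (training a reward model on human preference rankings, then optimizing the language model against it) is involved, but its central ingredient can be sketched: a scoring function that stands in for human judgment, used to pick the preferred output. The hand-written heuristic below is a hypothetical stand-in for a learned reward model, not anything from OpenAI’s actual system.

```python
def reward_model(response: str) -> float:
    """Toy stand-in for a learned reward model: scores responses
    the way human raters might, preferring ones that are polite
    and reasonably detailed."""
    score = 0.0
    if "please" in response.lower() or "happy to help" in response.lower():
        score += 1.0  # politeness bonus
    score += min(len(response.split()), 20) / 20  # reward detail, capped
    return score

def pick_best(candidates):
    """Best-of-n selection: return the candidate the reward model
    scores highest. In real RLHF, such scores also feed back into
    training the model itself."""
    return max(candidates, key=reward_model)

candidates = [
    "No.",
    "Happy to help! Here are three ways to approach the problem...",
]
print(pick_best(candidates))
```

In production RLHF the reward model is itself a neural network trained on thousands of human rankings, and its scores drive a reinforcement-learning update to the language model rather than a simple best-of-n filter.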
“The core focus of training and the reinforcement learning for ChatGPT is towards generating more human-like, verbose text,” Raghu Ravinutala, the co-founder and CEO of customer experience startup Yellow.ai, told Built In. He added that it is “by far” one of the largest language models available today.
While the inner workings of ChatGPT may sound complicated, it is this complexity that allows the chatbot to produce such human-like text.
The Making of ChatGPT
ChatGPT is the brainchild of OpenAI, an artificial intelligence research lab co-founded by billionaire business mogul Elon Musk and former Y Combinator President Sam Altman, along with a handful of other San Francisco-based entrepreneurs, in 2015. Billionaire entrepreneur and venture capitalist Peter Thiel, LinkedIn co-founder Reid Hoffman, Microsoft and Amazon Web Services were also among the company’s earliest investors.
At the time, OpenAI was a non-profit focused on developing artificial intelligence “in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” according to a statement from 2015. While OpenAI still operates a non-profit arm, it officially became a “capped-profit” corporation in 2019. It is currently valued at $29 billion, per recent reporting by the Wall Street Journal, making it one of the most valuable startups in the United States right now.
ChatGPT Fast Facts
- What does GPT stand for? GPT stands for generative pre-trained transformer, which is a language model that uses deep learning to generate natural, human-like text based on a given text input.
- When did ChatGPT launch? ChatGPT launched on November 30, 2022.
- Who created ChatGPT? ChatGPT was created by OpenAI, a San Francisco-based artificial intelligence research lab.
- Is ChatGPT free? ChatGPT is free to use. As of February 1, 2023, there is also a paid subscription version called ChatGPT Plus, which costs $20 a month.
Prior to ChatGPT, OpenAI launched several products, including the automatic speech recognition software Whisper and DALL-E, an AI art generator that can produce images based on text prompts. The company also created several predecessors to ChatGPT. Its GPT-2 language model was so good at writing fake news that the company initially decided not to release it in full. Then it released GPT-3 in 2020, which, like ChatGPT, created a lot of buzz with its ability to generate natural language.
When ChatGPT finally came on the scene in 2022, it didn’t take long for it to go viral. Altman, who is now OpenAI’s CEO, tweeted that it had more than 1 million users in the first five days after its rollout. Within a matter of weeks, the internet was flooded with people’s experiments, projects and conversations with the model. It was being used for mental health advice, to write prose in the style of Ernest Hemingway and even to do rocket science (it wasn’t very good at that, though).
In the months since, the chatbot’s ability to produce cogent text responses has pushed the limits of what was previously thought possible with artificial intelligence. Musk, who left OpenAI’s board in 2018, called ChatGPT “scary good.”
There are many ways in which ChatGPT is groundbreaking. Unlike most other chatbots, it can carry a conversation fluidly between prompts, making its interactions with users much more natural.
Benefits of ChatGPT
- ChatGPT is capable of carrying a natural, fluid conversation with users, meaning a user doesn’t have to repeat themselves.
- ChatGPT doesn’t require users to plug in examples of what they want before receiving an answer, making it more user-friendly than other chatbots.
- Users can plug in long, complex text, and ChatGPT can understand it.
“Most preceding models were kind of single interaction interfaces, so if you wanted to refer back to something you had said previously, you had to restate that thing. That’s kind of unnatural for most humans,” Stone said. “The technical challenge here was, I think, relatively easy in terms of persisting or flowing the past conversation back into each new question-answer iteration.”
ChatGPT is also “much better” at doing “zero-shot learning,” he continued, meaning users don’t need to provide examples for the model to understand the kind of output they’re looking for. This makes ChatGPT much more user-friendly than other chatbots.
“With ChatGPT, you can just ask the question, and the model will understand the format, the nature, and the type of response you’re looking for.”
“Earlier models before ChatGPT, you go back a year or two, they might need four or five examples to understand, ‘Oh, this is the answer you’re looking to get back’,” Stone said. And older models than that would have needed hundreds or even thousands of examples. But, “with ChatGPT, you can just ask the question, and the model will understand the format, the nature, and the type of response you’re looking for.”
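Stone’s contrast between older models and ChatGPT comes down to how prompts are written. The zero-shot prompt below just asks; the few-shot prompt prepends labeled examples so an older model can infer the expected output format. The sentiment task, label names and formatting here are illustrative assumptions, not anything prescribed by OpenAI.

```python
def zero_shot_prompt(text: str) -> str:
    """ChatGPT-style usage: just ask, no examples needed."""
    return ("Classify the sentiment of this review as positive "
            f"or negative:\n{text}")

def few_shot_prompt(text: str, examples) -> str:
    """Older-model-style usage: show labeled examples first so the
    model can mimic the answer format."""
    demo = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{demo}\nReview: {text}\nSentiment:"

examples = [
    ("Loved it, would buy again.", "positive"),
    ("Broke after two days.", "negative"),
]
print(zero_shot_prompt("Great value for the price."))
print(few_shot_prompt("Great value for the price.", examples))
```

The few-shot version still works with ChatGPT, and can help on unusual tasks, but for common requests the zero-shot form is usually enough.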
ChatGPT’s ‘Reading’ Ability
Of course, one of the most talked about strengths of ChatGPT is its writing ability. But it’s important to note that the bot is also incredibly good at “reading.” Or, at the very least, understanding the text being fed to it by users.
This is not only impressive, but it is also useful to companies looking to use this technology. A lot of the time, the outputs these organizations are looking for are relatively simple — they want to classify something, or take a specific action, Stone explained.
“The really complicated part is understanding the real-world history of, say, a customer, or [an] employee. And that’s where I think it has power as a reader,” he continued. “You can give it multiple documents, maybe even really long documents, and it will instantly be able to read all of those and reason about them, and then tie that back to the action. And then express the action we want to take in the form of writing.”
That being said, ChatGPT isn’t perfect. It still has quite a few limitations.
For one, the data on which its base model was trained is limited. ChatGPT draws on internet data to generate its responses, but it cannot search the internet for new information, and it does not update in real time. The base model was trained on data from 2021 and earlier, so it is not aware of events or news that have occurred since then.
And not everything worth knowing exists online. In fact, most of the questions or problems posed to ChatGPT, particularly when it is being used for business reasons, relate to information that is specific to that organization, more recent than 2021, or simply doesn’t exist on the open internet.
“How do we plug this into a [customer relationship management tool] so that we can use it to actually assist customers on issues that are just a few hours or even minutes old? That’s pretty uncharted territory,” Stone said.
ChatGPT Limitations
- Only trained on data from the open internet through 2021, so it does not know more recent information or internal company information.
- Tendency to hallucinate, or generate content that is stylistically correct but factually wrong.
- Does not publicize its sources, so there is no way to know the origin of the content it generates.
- High likelihood of biases (both subtle and unsubtle) leaking into the model and affecting its outputs.
Its reliance on data found online also makes ChatGPT vulnerable to false information, which in turn can impact the veracity of its statements. This often leads to what experts call “hallucinations,” where the output generated is stylistically correct, but factually wrong. Hallucinations can become a huge issue if ChatGPT is being used to, say, help a company decide what specific offerings a customer is eligible for, or write a news article. Or if someone is using the chatbot to ask questions about historical events, or their own health.
Instead of asking for clarification on an ambiguous question, or saying that it doesn’t know the answer, ChatGPT will just take a guess at what the question means and what the answer should be. And, because the model is able to produce incorrect information in such an eloquent way, the fallacies are hard to spot and control.
ChatGPT Is ‘Still An Experimentation’
This has led to quite a bit of backlash against the chatbot, particularly when it is used as a source of news or advice. Popular question-and-answer site Stack Overflow has even gone so far as to temporarily ban ChatGPT-generated responses, claiming the model’s tendency to get things wrong is “substantially harmful to the site and to users who are asking and looking for correct answers.”
“It’s still an experimentation, you cannot 100 percent rely on ChatGPT’s results,” Yellow.ai’s Ravinutala said. “They also don’t publicize their sources, so you don’t know the sources of the content generated.”
“It’s still an experimentation, you cannot 100 percent rely on ChatGPT’s results.”
This may not be an issue for long, though. Conversational AI startup Got It AI recently announced it has developed a new tool to identify and address ChatGPT hallucinations for enterprise applications. Other detectors for flagging whether content is AI generated have cropped up, too, but none of them are particularly effective. OpenAI even offers one of its own, but it has been shown to only correctly identify 26 percent of AI-written text.
ChatGPT also produces biased results. Most people know that, just because something is on the internet, that doesn’t make it true. Racism, sexism and all manner of prejudices run rampant online, and it is up to the individual to decide how much weight to give it. ChatGPT doesn’t have that ability. So, despite the guardrails OpenAI has put in place to prevent it, the chatbot still has a tendency to let biases (both subtle and unsubtle) creep into its outputs.
“There are things that have existed in the past that these statistically oriented models will then pick up on, but we don’t want to project those associations into the future. It’s especially dangerous if we don’t even know what those associations are,” Stone said. “We’ve got to be really careful.”
ChatGPT Use Cases
Despite its limitations, ChatGPT has proven to be not only cool, but quite useful. And with rumors of a $10 billion investment from Microsoft swirling, OpenAI’s chatbot stands to be even more disruptive.
Beyond the big headlines and philosophical debates, ChatGPT is actually quite practical, particularly in business applications. And it has affected how everyday people experience the internet in “profound ways,” Ravinutala said. This extends to areas like cloud computing and enterprise software as well. “We’ve already seen it. And I think we are in for much bigger things as this technology develops and gets adopted.”
5 ChatGPT Use Cases
- Content creation
- Customer service
- Healthcare
- Real estate
- Programming
ChatGPT is one of many AI content generators tackling the art of the written word — whether that be a news article, a press release, a social media post or a sales email. All a user has to do is hop on ChatGPT or some other platform, and type in a quick prompt. If they want to create a blog post about the health benefits of sweet potatoes, they just need to type in “Write an article about the benefits of sweet potatoes.” The model will then generate a draft that the user can edit and refine as needed.
ChatGPT can be used for other writing tasks beyond just content creation, too. It can translate a piece of text into different languages, summarize several pages of text into a paragraph, finish a partially complete sentence, generate dialogue and more. It can also be fine-tuned for specific use cases such as legal documents or medical records, where the model is trained on domain-specific data.
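Writing tasks like these are typically driven through ChatGPT’s API by wrapping the request in its chat-message format. The sketch below builds the request payload, which needs no network access; the actual call (commented out) assumes the `openai` Python package and a valid API key, and the system-message wording is an illustrative choice, not a required one.

```python
def writing_request(task: str, text: str) -> dict:
    """Wrap a writing task (summarize, translate, draft...) in the
    chat-message format the ChatGPT API expects."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "You are a helpful writing assistant."},
            {"role": "user",
             "content": f"{task}:\n\n{text}"},
        ],
    }

payload = writing_request(
    "Summarize the following text in one paragraph",
    "Sweet potatoes are rich in fiber, vitamins and minerals...",
)

# With the `openai` package installed and an API key configured:
# import openai
# response = openai.ChatCompletion.create(**payload)
# print(response["choices"][0]["message"]["content"])
```

Swapping the task string is all it takes to move between summarizing, translating, completing a sentence or generating dialogue.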
Of course, ChatGPT’s impressive writing abilities have not gone without some controversy. Teachers are concerned that students will use it to cheat, prompting some schools to completely block access to it. And professional writers across a variety of industries are worried ChatGPT and other AI writers could take their jobs. Opendoor’s Stone doesn’t think this is likely, though.
“When technology makes people more productive, more people tend to be employed,” he said, likening it to what the invention of the personal computer or the internet did for the productivity of office workers. Technology like ChatGPT will serve as a resource, not a replacement, for a lot of professionals. “We’ll use language models to help us write first drafts, to brainstorm. But then the value of domain experts will continue to refine that and make it better than whatever a model can produce.”
ChatGPT and other conversational AI models have generated a lot of buzz in the customer services space, offering a way to automate responses to customer queries as opposed to relying on a human agent. Ravinutala said large language models like ChatGPT can be used by customer experience companies to automate customer service interactions, allowing companies to better understand user intent and respond accordingly. He added that Yellow.ai’s sales team has already begun using ChatGPT to compose emails to customers, with humans making minor edits when needed.
The use of AI in customer service not only saves money, but it also addresses the ongoing labor shortage in this industry, which has worsened in the wake of the Covid-19 pandemic.
Again, the knowledge of ChatGPT’s base model stops at the data that exists on the open internet, so it cannot necessarily be used to, for example, solve a specific billing issue between a cable company and a customer. Still, ChatGPT and other conversational AI models stand to disrupt the customer service industry. And since customer service applies to so many companies, this disruption extends to many other industries as well.
ChatGPT has become a hot subject in the healthcare industry for both patients and professionals alike.
The chatbot has garnered quite a bit of attention for its ability to successfully diagnose medical conditions, but that’s not where its impact ends. It also has the potential to automate many of the daily tasks in the medical field, from generating and summarizing patient records to scheduling appointments. Clinical decision support systems, or CDSs, are a longstanding fixture in healthcare, helping medical professionals make decisions about patient care, and enhancing them with AI could make them more effective.
Doctors are also plugging their patients’ symptoms into ChatGPT to get a differential diagnosis — a list of possible conditions related to presenting symptoms, Harvey Castro, an emergency physician who advises digital health companies on how best to integrate ChatGPT, recently told CBS News. “It’s a supplement.”
When you look at a real estate listing, it’s bound to have a few essentials: square footage, price and the number of bathrooms, to name a few. It is also likely accompanied by a paragraphs-long description of the property. ChatGPT and other chatbots are good at writing those descriptions, saving the agents who would otherwise write them a lot of time.
ChatGPT is also good at helping handle the administrative tasks of real estate. The industry is governed by many laws and policies, which means a lot of paperwork. ChatGPT can be used on top of an internal search engine, allowing agents to easily find the document they need, and the correct information within that document, for a specific question or quandary, according to Stone.
“Large language models have the potential to totally change the paradigm here so that, when you go into an internal search engine and ask a question, you can just get a direct answer,” Stone said. “Now, that answer has to be correct. And you need to be able to trace it back to where it came from, but it can save you a ton of time and the potential of looking at the wrong document.”
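The traceability Stone describes usually comes from retrieving a source document first, then having the model answer from that document alone. The sketch below illustrates the retrieval half with naive keyword overlap; real systems use embeddings and a vector index, and the document names and contents here are invented for illustration (the model call itself is omitted).

```python
# Hypothetical internal documents an agent might search.
documents = {
    "disclosure_policy.txt":
        "sellers must disclose known roof damage before closing",
    "commission_faq.txt":
        "standard commission is split between listing and buyer agents",
}

def retrieve(question: str):
    """Return (filename, text) of the document sharing the most
    words with the question, so any answer traces back to it."""
    q_words = set(question.lower().split())
    def overlap(item):
        return len(q_words & set(item[1].split()))
    return max(documents.items(), key=overlap)

source, passage = retrieve("What must sellers disclose about roof damage?")
# The retrieved passage, with its filename, would then be handed to
# the language model so the answer cites a checkable source:
prompt = f"Answer using only this passage from {source}:\n{passage}"
```

Keeping the filename attached to the passage is what lets a user verify the answer came from the right document rather than a hallucination.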
When it comes to programming and coding, ChatGPT has completely changed the game. Not only can it generate working computer code of its own (in many different languages), but it can also translate code from one language to another and debug existing code. By virtue of its training, ChatGPT has read vastly more documentation and example code than any individual programmer ever could, which is why it can write code in a matter of seconds and provide step-by-step explanations as it does so.
Some developers were so excited by ChatGPT’s capabilities that they used it to actually create their own apps, including a spreadsheet assistant capable of performing complex calculations in response to a simple request.
At the same time though, there are rising concerns about the potential of bad actors to use ChatGPT’s powers to generate malware, or software that is intentionally designed to disrupt, damage or gain unauthorized access to a computer system. Although OpenAI’s terms of service specifically ban the generation of malware, spam and other forms of targeted cybercrime, it’s become evident that efforts are already underway to develop malicious software using ChatGPT — posing a major cybersecurity threat.
“We expect to see hackers get a much better handle on how to use ChatGPT successfully for nefarious purposes; whether as a tool to write better mutable malware or as an enabler to bolster their ‘skill set,’” Shishir Singh, the CTO of BlackBerry Cybersecurity, said in a recent statement. “As the maturity of the platform and the hackers’ experience of putting it to use progresses, it will get more and more difficult to defend without also using AI in defense to level the playing field.”
What Lies Ahead for ChatGPT?
Meanwhile, the ChatGPT disruption on everyone’s minds right now is the suggestion that it could transform web search, potentially dethroning Google.
Microsoft’s bullish support of OpenAI as of late is certainly stoking the flames. The tech giant’s varied product offerings provide several opportunities to take on the likes of Google. In fact, Microsoft already uses GPT-3.5 (the technology behind ChatGPT) to auto-generate snippets of code in its Visual Studio product. And it recently folded OpenAI’s newest language model into its web browser Edge and search engine Bing, which has long played second banana to Google. Taking “key learnings and advancements” from ChatGPT and GPT-3.5, the new and improved Bing is “even faster, more accurate and more capable” than ChatGPT, according to Microsoft.
But Google isn’t going down without a fight. Just one day before Microsoft made its Bing announcement, the company introduced its own ChatGPT rival, Bard, upping the ante in the ongoing conversational AI arms race. Google hasn’t offered a lot of detail about how Bard will be integrated into its longstanding search engine, and, for now, it is only available to a select group of people. The bot didn’t get off to a great start, though, publishing a factual error in its very first public demo that prompted a significant drop in Google’s stock prices soon after. It’s unlikely that Bard’s shaky beginning spells doom for Google entirely, but of course anything can happen.
“This technology is amazing, but it’s still first generation.”
One thing that is much more certain for ChatGPT is the inevitability of its successor.
“This technology is amazing, but it’s still first generation,” Kagan, the tech industry analyst, said, likening ChatGPT to what the Ford Model T did for cars. “It was a really exciting innovation, but it was nothing compared to what we’re driving today.”
Although ChatGPT is only a few months old, OpenAI recently released GPT-4 — a much-anticipated language model that will be the underlying engine powering ChatGPT going forward. The new model is multimodal, meaning it accepts both images and text as inputs, although it only generates text as an output. For now, the company is only selling access to its text-input capability so that businesses and individuals can build their own applications on top of it.
GPT-4 performs much better than GPT-3.5, which, until recently, was the foundation of ChatGPT. The new model was given a whole battery of professional and academic benchmark tests, and while it was “less capable than humans” in many scenarios, it exhibited “human-level performance” on several of them, according to OpenAI. For instance, GPT-4 managed to score well enough to be within the top 10 percent of test takers in a simulated bar exam, while GPT-3.5’s score was at the bottom 10 percent. OpenAI also claims that GPT-4 is generally more trustworthy than GPT-3.5 — returning more factual answers that stay within the guardrails that prevent biased outputs and other issues.
Yet, like GPT-3.5 and similar systems, GPT-4 remains flawed. It still hallucinates, making up information. And it is still possible to get the model to spit out biased or inappropriate language. It also generally lacks the knowledge of events that have occurred after the majority of its data cuts off (September of 2021), and it does not learn from experience. Like humans, it can make errors performing both simple and difficult tasks.
And there is a lot we do not know about GPT-4. OpenAI has disclosed very little about how big the model is, and is keeping just how much data it has been trained on under wraps, claiming both competitive and safety reasons.
Still, it is safe to assume that with this and each future iteration of its GPT series, OpenAI is working toward ultimately achieving artificial general intelligence, where a machine is capable of behaving and performing actions the way humans can. “We are very much here to build AGI,” co-founder and CEO Altman said in an interview with StrictlyVC.