AI Detection: What It Is, How It Works, Top Tools to Know

We break down the tools that use AI to spot AI.

Written by Ellen Glover
Published on Aug. 20, 2024

With the rise of generative AI, it’s easy and inexpensive to produce highly convincing fabricated content. Today, AI text and image generators, as well as deepfake technology, are used in all kinds of ways — from students taking shortcuts on their homework to fraudsters spreading false information about wars, political elections and natural disasters.

This has led to the emergence of a new field known as AI detection, which focuses on differentiating between human-made and machine-produced creations.

Top AI Detection Tools

  • Originality.ai
  • GPTZero
  • Hive
  • Winston AI
  • Copyleaks

“We’re in a new world now. And, unfortunately, it turns out that humans aren’t very well suited to be able to detect this stuff,” Kevin Guo, co-founder and CEO of AI content moderation and detection company Hive, told Built In. “The only way to really fix this problem at scale is, ironically, with AI.”


 

What Is AI Detection?

AI detection is the process of identifying whether a piece of content (text, images, videos or audio) was created using artificial intelligence. Educators use it to verify students’ essays, online moderators use it to identify and remove spam content on social media platforms, and journalists use it to verify the authenticity of media and mitigate the spread of fake news.

AI detection often requires the use of AI-powered software that analyzes various patterns and clues in the content — such as specific writing styles and visual anomalies — that indicate whether a piece is the result of generative AI or not.

Essentially, these tools use artificial intelligence to detect instances of AI involvement, Alex Cui, co-founder and CTO of AI detection company GPTZero, told Built In. “It’s AIs trying to judge other AIs.”

 

5 AI Detection Tools to Know

These are some of the top AI detection tools available today.

1. Originality.ai

Originality.ai’s AI text detection services are intended for writers, marketers and publishers. The tool has three modes — Lite, Standard and Turbo — which have different success rates depending on the task at hand. Lite is 98 percent accurate, according to the company, and is meant for users who allow AI editing; Standard, the default mode, also allows for some AI use but is slightly more accurate than Lite; and Turbo is for users who have zero tolerance for AI. Originality.ai works with just about all of the top language models on the market today, including GPT-4, Gemini, Claude and Llama.

Once the input text is scanned, the tool gives users an overall percentage breakdown of what it perceives to be human-made versus AI-generated content, along with sentence-level highlights. Originality.ai also offers a plagiarism checker, a fact checker and readability analysis.

2. Hive

Hive offers free AI detection tools for text, images, videos and audio. Its tool can identify content made with several popular generative AI engines, including ChatGPT, DALL-E, Midjourney and Stable Diffusion.

Once the user inputs media, the tool scans it and provides an overall score of the likelihood that it is AI-generated, along with a breakdown of what AI model likely created it. In addition to its AI detection tool, Hive also offers various moderation tools for text, audio and visuals, allowing platforms to flag and remove spam and otherwise harmful posts.

3. Winston AI

Winston AI’s AI text detector is designed to be used by educators, publishers and enterprises. It works with all of the main language models, including GPT-4, Gemini, Llama and Claude, achieving up to 99.98 percent accuracy, according to the company. It can even identify paraphrased content created by writing assistants. The tool also works in multiple languages beyond English.

After a user inputs text, Winston AI breaks down the probability that it was AI-generated and highlights the sentences it suspects were written with AI. It also provides a readability score and a plagiarism checker.

4. GPTZero

GPTZero is an AI text detector that offers solutions for teachers, writers, cybersecurity professionals and recruiters. Among other things, the tool analyzes “burstiness” and “perplexity.” Burstiness is a measurement of variation in sentence structure and length, and perplexity is a measurement of how unpredictable the text is. Both variables are key in distinguishing between human-made text and AI-generated text.
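
To make those two signals concrete, here is a rough sketch of how perplexity and burstiness might be measured, using the open-source Hugging Face transformers library and GPT-2 as a stand-in scoring model. These choices are assumptions for illustration, not GPTZero’s actual implementation:

```python
# Rough sketch only: score perplexity with GPT-2 and burstiness as
# sentence-length variation. Not GPTZero's actual method.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How unpredictable the text is to a language model (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Variation in sentence length; human writing tends to vary more."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    mean = sum(lengths) / len(lengths)
    return (sum((n - mean) ** 2 for n in lengths) / len(lengths)) ** 0.5

sample = "AI plays a crucial role in shaping modern industries. It is certainly important."
print(perplexity(sample), burstiness(sample))
```

Low perplexity combined with low burstiness points toward machine-generated text, though neither signal is conclusive on its own.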

After it’s done scanning the input media, GPTZero classifies the document as either AI-generated or human-made, with a sliding scale showing how much consists of each. Additional details are provided based on the level of scan requested, ranging from basic sentence breakdowns to color-coded highlights corresponding to specific language models (GPT-4, Gemini, etc.). Users can also get a detailed breakdown of their piece’s readability, simplicity and average sentence length.

5. Copyleaks

Copyleaks’ AI text detector is trained to recognize human writing patterns, and only flags material as potentially AI-generated when it detects deviations from these patterns. It can even spot AI-generated text when it is mixed in with human writing, achieving more than 99 percent accuracy, according to the company. The tool supports more than 30 languages and covers AI models like GPT-4, Gemini and Claude, as well as newer models as they’re released.

Copyleaks also offers a separate tool for identifying AI-generated code, as well as plagiarized and modified code, which can help mitigate potential licensing and copyright infringement risks. Plus, the company says this tool helps protect users’ proprietary code, alerting them of any potential infringements or leaks.


 

How Are AI Detection Tools Being Used?

Just as generative AI is widely used across many areas of life, so too is AI detection. These tools are used to help verify the authenticity of content across a variety of fields:

  • Education: Teachers use AI text detectors to check the originality of their students’ homework, verifying that essays and other written assignments were completed by the students themselves and not a text generator. 
  • Social Media: Online moderators use AI detection tools to identify and filter out deepfake videos, fabricated images and misleading, AI-generated articles in an effort to maintain credibility and trustworthiness on their platforms.
  • Journalism: Journalists use AI detection tools to help verify the authenticity of images, videos and other news articles in an effort to stop the spread of misinformation. 
  • Cybersecurity: AI detection tools help cybersecurity professionals identify and counteract phishing campaigns and similar threats that pose a data security risk.
  • Insurance: Insurance companies use AI detection tools to identify fraudulent claims that rely on artificially doctored images, helping to ensure they don’t give out checks for accidents that never happened.


 

How Does AI Detection Work?

At a high level, AI detection involves training a machine learning model on millions of examples of both human- and AI-generated content, which the model analyzes for patterns that help it to distinguish one from the other. The exact process looks a bit different depending on the specific tool that’s used and what sort of content — text, visual media or audio — is being analyzed.
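
As a simplified illustration of that idea, the sketch below trains a tiny text classifier on labeled examples of human-written and AI-generated writing. The two-sentence dataset, the scikit-learn pipeline and the word-frequency features are all stand-ins chosen for brevity; real detectors are trained on millions of examples with far richer models:

```python
# Minimal sketch of the general approach: learn to separate human-written
# from AI-generated text using labeled examples. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = AI-generated, 0 = human-written.
texts = [
    "Certainly, collaboration plays a crucial role in shaping modern workplaces.",
    "ugh, my train was late again and I spilled coffee all over my notes",
]
labels = [1, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Probability that a new piece of text is AI-generated, per this toy model.
print(detector.predict_proba(["Emphasizing the significance of teamwork is essential."])[0][1])
```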

Text Detection

Tools that identify AI-generated text are usually built on large language models, similar to those used in the content generators they’re trying to spot. They examine a piece’s word choices, voice, grammar and other stylistic features, and compare it to known characteristics of human and AI-written text to make a determination.

Generally, AI text generators tend to follow a “cookie cutter structure,” according to Cui, formatting their content as a simple introduction, body and conclusion, or a series of bullet points. He and his team at GPTZero have also noted several words and phrases that LLMs use often, including “certainly,” “emphasizing the significance of” and “plays a crucial role in shaping” — the presence of which can be an indicator that AI was involved.
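
As a toy illustration of that last cue, a detector could scan a passage for the stock phrases Cui’s team flagged. A real tool weighs many such signals together rather than relying on a word list, so the snippet below is only a sketch:

```python
# Toy check for stock phrases that LLMs tend to overuse, per GPTZero's
# observations. One weak signal among many, not a detector on its own.
STOCK_PHRASES = [
    "certainly",
    "emphasizing the significance of",
    "plays a crucial role in shaping",
]

def stock_phrase_hits(text: str) -> list[str]:
    lowered = text.lower()
    return [phrase for phrase in STOCK_PHRASES if phrase in lowered]

print(stock_phrase_hits("Teamwork plays a crucial role in shaping outcomes."))
```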

Image Detection

Every picture an AI image generator makes is packed with millions of pixels, each containing clues about how it was made. Image detectors closely analyze these pixels, picking up on things like color patterns and sharpness, and then flagging any anomalies that aren’t typically present in real images — even the ones that are too subtle for the human eye to see.

These tools don’t interpret or process what’s actually depicted in an image, such as faces, objects or scenes, and they don’t take its subject matter into account when determining whether or not it was created using AI. Rather, they focus only on an image’s technical characteristics.
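
To give a flavor of what those technical characteristics can look like, the sketch below computes a single frequency-domain statistic from an image’s pixels — the kind of low-level signal a trained detector might weigh, though no real tool relies on one number like this. It assumes the NumPy and Pillow libraries and a hypothetical file name:

```python
# Illustrative only: measure how much of an image's spectral energy sits in
# the highest frequencies, ignoring what the image actually depicts.
import numpy as np
from PIL import Image

def high_frequency_energy(path: str) -> float:
    """Fraction of spectral energy outside the low-frequency center of the spectrum."""
    pixels = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(pixels)))
    h, w = spectrum.shape
    low_freq = spectrum[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    return 1.0 - low_freq.sum() / spectrum.sum()

# print(high_frequency_energy("suspect_image.png"))  # hypothetical file
```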

Audio Detection

AI audio detection tools listen very differently than humans do. Instead of focusing on the content of what is being said, they analyze speech flow, vocal tones and breathing patterns in a given recording, as well as background noise and other acoustic anomalies beyond just the voice itself. All of these factors can be helpful cues in determining whether an audio clip is authentic, manipulated or completely AI-generated. 
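
For illustration, the snippet below extracts a few of the acoustic features described above — vocal-tone summaries, rough pause counts and clip duration — using the open-source librosa library and a hypothetical recording. Real detectors feed features like these into trained models rather than inspecting them directly:

```python
# Sketch of acoustic feature extraction for audio detection. The features are
# real; the detection logic that would consume them is omitted.
import librosa

def acoustic_features(path: str) -> dict:
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # vocal-tone summary
    voiced = librosa.effects.split(signal, top_db=30)        # speech vs. silence regions
    return {
        "mfcc_mean": mfcc.mean(axis=1),
        "pause_count": max(len(voiced) - 1, 0),              # rough count of gaps in speech
        "duration_sec": len(signal) / sr,
    }

# features = acoustic_features("clip.wav")  # hypothetical recording
```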

Video Detection

Like image detectors, video detectors look at subtle visual details to determine whether or not something was generated with AI. But they also assess the temporal sequence of frames, analyzing the way motion transitions occur over time. Detectors often analyze the audio track for signs of altered or synthetic speech, too, including abnormalities in voice patterns and background noise. Unusual facial movements, sudden changes in video quality and mismatched audio-visual synchronizations are all telltale signs that a clip was made using an AI video generator.
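
One simple temporal cue a video detector might compute is how abruptly consecutive frames change, which can surface sudden quality shifts or unnatural motion. The sketch below uses OpenCV and a hypothetical file name; a real detector would combine many such signals with audio analysis and trained models:

```python
# Toy temporal analysis: average pixel difference between consecutive frames.
# Spikes can flag abrupt transitions worth a closer look; not a detector by itself.
import cv2
import numpy as np

def frame_change_scores(path: str) -> list[float]:
    cap = cv2.VideoCapture(path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            scores.append(float(np.abs(gray - prev).mean()))
        prev = gray
    cap.release()
    return scores

# scores = frame_change_scores("clip.mp4")  # hypothetical file
```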

 

How Accurate Are AI Detection Tools?

While AI detection has been heralded by many as one way to mitigate the harms of AI-fueled misinformation and fraud, it is still a relatively new field, so results aren’t always accurate. These tools might not catch every instance of AI-generated material, and may produce false positives.

“They’re good but they’re not perfect,” Cui said. “There’s still places where they can go wrong.”

Accuracy rates for AI detection tools can be as high as 98 percent and as low as 50 percent, according to one paper published by researchers at the University of Chicago’s Department of Computer Science.

Because of how AI detectors work, they can never guarantee 100 percent accuracy. Factors like training data quality and the type of content being analyzed can significantly influence the performance of a given AI detection tool.

“It’s really hard to get [them] to work well,” Guo said. “Generally speaking, these models tend to have a high error rate, both in terms of false positives — it thinks a piece of content is AI-generated when it’s really not — or it doesn’t flag something that is AI-generated.”

Both mistakes carry significant risks. If content created by a human is falsely flagged as AI-generated, it can seriously damage a person’s reputation and career, causing them to get kicked out of school or lose work opportunities. And if a tool mistakes AI-generated material as real, it can go completely unchecked, potentially allowing misleading or otherwise harmful information to spread.


“You cannot completely rely on these tools, especially for sensitive applications,” Vinu Sankar Sadasivan, a computer science PhD student at the University of Maryland, told Built In. He has co-authored several papers highlighting the inaccuracies of AI detectors. “We should be really careful when deploying them in practice,” he said.

And as artificial intelligence becomes more adept at mimicking humans’ creative abilities, accurately flagging AI-generated content will only grow more challenging — “it’s kind of a never-ending battle,” Guo said, “and each side pushes the other.”

Because of this, many experts argue that AI detection tools alone are not enough. Techniques like AI watermarking are gaining popularity, providing an additional layer of protection by having creators automatically label their content as AI-generated. But Sadasivan warns that no method will ever be perfect.

“Watermarking and other techniques are nice to employ, it’s always nice to have them as a first layer of security. But we should always be aware of the pitfalls and the disadvantages of these techniques,” he explained. “Whenever an adversary or an attacker wants to evade these detection techniques, it’s always possible.”
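
To give a sense of how one published style of text watermarking can be checked, the sketch below tests whether words from a pseudorandom “green list” — which a cooperating generator would have favored — show up more often than chance. The hashing key and the word-level scheme here are simplified for illustration; real schemes key the list to preceding tokens and are considerably more robust:

```python
# Simplified "green list" watermark check: if a generator secretly favored
# green words, they should be statistically over-represented in its output.
import hashlib
import math

def is_green(word: str, key: str = "demo-key") -> bool:
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words count as "green"

def watermark_z_score(text: str) -> float:
    words = text.split()
    greens = sum(is_green(w) for w in words)
    expected, std = len(words) * 0.5, math.sqrt(len(words) * 0.25)
    return (greens - expected) / std  # large positive values suggest a watermark

print(watermark_z_score("The quick brown fox jumps over the lazy dog"))
```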

Frequently Asked Questions

How do AI detection tools work?

AI detection tools work by analyzing various types of content (text, images, videos, audio) for signs that it was created or altered using artificial intelligence. Using AI models trained on large datasets of both real and AI-generated material, they compare a given piece of content against known AI patterns, noting any anomalies and inconsistencies.

How accurate are AI detection tools?

The accuracy of AI detection tools varies widely, with some tools successfully differentiating between real and AI-generated content nearly 100 percent of the time and others struggling to tell the two apart. Factors like training data quality and the type of content being analyzed can significantly influence the accuracy of a given AI detection tool.
