How to Fight AI-Generated Fake News — With AI

Misinformation has harmful effects in society at large, and your business is at risk too. Fortunately, you can use AI technology to stop its spread.

Written by Kalina Cherneva
Published on Nov. 12, 2024

Since events like Brexit and the 2016 U.S. elections, artificial intelligence (AI) has often been criticized for its role in influencing public opinion through psychographic profiling and social media algorithms. These platforms, although designed to promote sharing, can unintentionally amplify sensational content and create echo chambers where misinformation thrives. This phenomenon is particularly evident around politically charged topics such as war, Covid-19, and even everyday matters like environmental concerns and technology. For instance, misinformation linking 5G cell towers to the coronavirus pandemic led to vandalism in the U.K.

Fake news can harm your organization, too. For example, false claims about Starbucks offering discounts to undocumented immigrants created a controversy around the brand. The societal polarization caused by manipulated narratives can manifest within your company, affecting its culture, cohesion and productivity.

In this article, I aim to explore how companies can use AI to prevent the spread of fake news within their organizations.

5 Ways to Protect Your Organization From Misinformation

  1. Train an AI fact checker for your organization.
  2. Keep a human in the loop.
  3. Implement media literacy programs for your staff.
  4. Gamify training to boost its effectiveness.
  5. Scale up training programs around high-impact events like elections.


 

The Threat of Misinformation to Organizations

Misinformation is akin to a modern-day bank run: individuals might boycott a company on moral grounds as swiftly as they would withdraw money during a financial crisis. Voltaire’s observation that common sense is not so common is particularly relevant in these scenarios. Even if a company isn’t publicly traded or frequently in the news, misleading information can still have a serious impact. Misinformation-driven polarization undermines cooperation and trust through a divide-and-conquer dynamic, and its effects quickly become apparent within organizations.

According to a Leadership IQ survey, 59 percent of respondents expressed concern about fake news in the workplace, with 24 percent being very concerned. The survey also highlighted an increase in problem behaviors associated with misinformation, such as criticism, dismissing others’ ideas, blaming, lying and more.

As Dr. Teri Tompkins, a professor at the Graziadio Business School at Pepperdine University, notes, “I’m worried that when people aren’t able to trust each other and have confidence that the other person is a nice person, then they won’t share information. They’ll doubt what the other person says, which will slow decisions down. They will take sides and have biases.”

Trust is crucial for organizational strength and employee motivation. Deloitte reports that 80 percent of employees who trust their employers feel motivated, compared to less than 30 percent of those who don’t. Less than half of workers say they trust their employer, however. Trust enhances productivity, reduces turnover and improves job quality. As work becomes more collaborative and creative, trust and psychological safety are essential for fostering innovation and effective teamwork. A lack of trust can severely hinder an organization’s ability to function effectively.

 

Using AI to Detect Misinformation 

Humans naturally gravitate toward sensationalism and scandal. Daniel Kahneman, in his book Thinking, Fast and Slow, explains that humans have two types of cognition: System One, which is fast and intuitive, and System Two, which is slow and deliberate. System One is prone to confirmation bias, making it crucial to question a source whenever an article triggers strong emotions or uses charged language.

A 2018 study by MIT researchers found that Twitter users were 70 percent more likely to retweet false news than true news, and that false stories spread about six times faster. Two kinds of signals proved to be key predictors of falsity: characteristics of the network sharing the content, such as its size, influence, speed of sharing and the interconnectedness of its users, and linguistic features of the content itself, such as sensationalist language, emotional appeal and specific word choices. The researchers’ model correctly distinguished false from true content 75 percent of the time.
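
As a rough illustration of how such network and linguistic signals can feed a classifier, here is a minimal sketch using scikit-learn. The feature names and synthetic data are hypothetical stand-ins, not the researchers’ actual model:

```python
# Illustrative sketch: predicting whether a story is false from network and
# linguistic features like those the MIT study found predictive. All data
# and feature choices here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [network_size, share_velocity, user_interconnectedness,
#            sensationalism_score, emotional_appeal] -- hypothetical features.
X = rng.random((1000, 5))
y = rng.integers(0, 2, size=1000)  # 1 = false story, 0 = true story

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```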

Critically evaluating content before sharing it is therefore crucial. By recognizing our natural biases and using AI tools, we can more effectively spot and reduce the spread of false information.

Fortunately, transformer-based approaches, which are the basis of large language models (LLMs), offer even greater accuracy in identifying fake news due to their natural language comprehension capabilities. They can also be further fine-tuned to handle this specific task.
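
As a sketch of what that fine-tuning can look like in practice, here is a minimal example using the Hugging Face Transformers library. The two-example “dataset” is a placeholder; a real project would use a labeled news corpus:

```python
# Hypothetical sketch: fine-tuning a BERT-style model as a binary
# fake-news classifier with Hugging Face Transformers.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = real, 1 = fake

# Placeholder training data; substitute a real labeled corpus.
texts = ["Scientists confirm the moon is made of cheese",
         "Central bank raises interest rates by 0.25 percent"]
labels = [1, 0]
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class NewsDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fake-news-model", num_train_epochs=1),
    train_dataset=NewsDataset(),
)
trainer.train()
```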

In a 2023 study, researchers enhanced a transformer-based model (BERT) with a bidirectional gated recurrent unit, achieving an impressive F1 score of 98 percent in detecting false news. The F1 score is a metric that balances precision (how many items flagged as fake news actually are fake) and recall (how many actual fake news items were correctly identified). This high score indicates the model’s exceptional ability to distinguish false content from real in the data set.
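
To make the metric concrete, here is the computation with invented counts (these numbers are illustrative only, not taken from the study):

```python
# Illustrative counts, not from the study.
true_positives = 490   # fake items correctly flagged as fake
false_positives = 10   # real items wrongly flagged as fake
false_negatives = 10   # fake items the model missed

precision = true_positives / (true_positives + false_positives)  # 0.98
recall = true_positives / (true_positives + false_negatives)     # 0.98
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean: 0.98
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```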

With such AI tools, organizations can more effectively combat misinformation and protect their internal cultures and operations.

 

AI’s Potential Against Fake News

The good news is that several AI-powered tools can help stop the spread of fake news.

LLMs

AI, unlike humans, is not susceptible to sensation or emotion. In Kahneman’s terms, AI operates solely with System Two cognition, free from biases like confirmation bias. This makes AI a valuable tool for verifying the authenticity of information. LLMs have access to vast amounts of online data, enabling them to identify historically debunked claims. 

If the training data is biased, however, the AI might unintentionally learn these biases, affecting its accuracy. Thus, you have to carefully evaluate data and algorithms to ensure AI effectively combats fake news.

Fact Checker

Launched in January 2024, Fact Checker is a specialized GPT available in OpenAI’s GPT Store, designed to deliver precise fact-checking. Since OpenAI announced that custom GPTs can be created without coding, more than 3 million have been built in just two months.

Although specific details about the precision and reliability of these publicly created GPTs are not extensively documented, they are built on the same foundational technology as OpenAI’s well-regarded models, which suggests a baseline level of reliability. The GPT Store allows public access to these tools while keeping user prompts and data private from the developers of custom GPTs. Among the offerings, the store features 10 dedicated fact-checking GPTs, including some tailored for specific languages, giving users a range of options.

These developments highlight AI’s potential in combating misinformation. Fact Checker uses the power of LLMs to assess the credibility of information by cross-referencing it with a vast database of verified facts and historical data. For instance, in a workplace setting, if an employee at a tech company encounters a claim that a competitor has developed a groundbreaking AI technology poised to disrupt the market, they can use Fact Checker to verify its authenticity. The tool would analyze the claim against existing scientific literature, patents, and credible industry reports to determine its validity. 

Although Fact Checker refuses to create fake news itself for ethical reasons, it explains the reasoning behind its conclusions, citing issues such as a lack of scientific evidence, vague sources, conspiratorial language and known historical hoaxes. It also expresses, as a percentage, how confident it is that the information is false.

Fact Checker is designed to help both individuals and organizations quickly identify false or misleading claims, thereby improving decision-making and promoting the spread of accurate information. Its real-time fact-checking makes it a valuable resource in environments where rapidly spreading misinformation can have significant consequences.
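
Custom GPTs like Fact Checker run inside ChatGPT rather than exposing their own API, but a team could approximate the same workflow with the OpenAI Python SDK. The model choice and system prompt below are assumptions for illustration, not the product’s actual configuration:

```python
# Hypothetical sketch: recreating a fact-checking workflow with the OpenAI
# Python SDK. This approximates what a fact-checking GPT does; it does not
# call the Fact Checker product itself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

claim = ("A competitor has developed an AI chip that is 100x faster "
         "than anything on the market.")

response = client.chat.completions.create(
    model="gpt-4o",  # model choice is an assumption
    messages=[
        {"role": "system", "content": (
            "You are a fact checker. Assess whether the user's claim is "
            "likely true or false, explain your reasoning, name the kinds "
            "of sources that could confirm it, and state your confidence "
            "that the claim is false as a percentage.")},
        {"role": "user", "content": claim},
    ],
)
print(response.choices[0].message.content)
```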

Drawbacks

AI does have some limitations when it comes to detecting misinformation, however. It might mistakenly flag true information as false, especially when that information is shared by someone passionate or opinionated. Research evaluating AI’s ability to detect fake news also tends to focus on removing false information and may overlook the importance of correctly identifying true information. Because of these challenges, AI should act as a copilot alongside humans rather than a standalone solution, with a human kept in the loop, especially in uncharted territory.

 

How to Incorporate AI-Powered Fact Checking

The widespread availability of large language models with access to virtually all information online might be just the bitter pill humans need to cure the love of sensation and scandal that makes us so susceptible to false or distorted content. A taste for controversy is the one thing artificial intelligence will never acquire.

So, how should companies use AI to prevent the spread of false information internally?

Train a Fact Checker for Your Organization

Organizations can enhance their defenses against misinformation by customizing AI tools like GPT Fact Checker to suit their needs. This involves feeding the Fact Checker with industry-specific data and scenarios relevant to the organization. By doing so, the tool can more accurately identify and flag misinformation that could affect the company’s operations or reputation. Regular updates can ensure the AI remains effective as new types of misinformation emerge. If misinformation is a big problem for your organization, you can even invest in fine-tuning a smaller model for a specific fact-checking task.

For example, a healthcare company might train its AI Fact Checker with medical journals and regulatory guidelines. This way, if an employee receives an email with claims about a new drug’s benefits that aren’t supported by evidence, the AI can flag it for further review.

Integrating such a tailored Fact Checker into the company’s communication systems can provide real-time alerts, helping to maintain a trustworthy information environment.
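
One simple way to build such a tailored checker is to retrieve the most relevant passages from the organization’s vetted sources and hand them to an LLM as evidence. The sketch below uses basic TF-IDF retrieval; the documents and claim are hypothetical, and a production system would use the organization’s real reference material and a proper vector database:

```python
# Hypothetical sketch: grounding fact checks in an organization's own
# vetted sources via simple TF-IDF retrieval before querying an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for a healthcare company's vetted reference material.
documents = [
    "Drug X reduced symptoms by 12 percent in a phase III trial.",
    "Regulatory guidance: efficacy claims require peer-reviewed evidence.",
]

def retrieve_evidence(claim: str, top_k: int = 1) -> list[str]:
    """Return the vetted documents most similar to the claim."""
    vectorizer = TfidfVectorizer().fit(documents + [claim])
    doc_vectors = vectorizer.transform(documents)
    claim_vector = vectorizer.transform([claim])
    scores = cosine_similarity(claim_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

claim = "Drug X cures the condition outright in 90 percent of patients."
evidence = retrieve_evidence(claim)
# The retrieved evidence would then go into an LLM prompt, for example:
prompt = f"Claim: {claim}\nVetted evidence: {evidence}\nIs the claim supported?"
print(prompt)
```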

Remember, however, that AI can still be trained on less reliable information or manipulated by malicious actors to spread false narratives. Organizations must ensure their AI tools are built on credible sources and remain vigilant against potential misuse.

Train Your Employees

Although AI tools are powerful, human judgment remains crucial in combating misinformation. Organizations should invest in training programs that enhance employees’ critical thinking and media literacy skills. Educating staff on recognizing misinformation and understanding the AI tools at their disposal can empower them to make informed decisions. You can even gamify AI-assisted fact-checking during periods when misinformation surges, such as election seasons.

For instance, a company could set up a simulated news environment where employees use AI tools to identify and debunk false headlines. This exercise not only sharpens their fact-checking skills but also familiarizes them with the AI’s capabilities and limitations.

Encouraging a culture of open dialogue and skepticism towards sensational claims can further strengthen the workforce’s ability to discern truth from falsehood. By combining AI capabilities with well-informed employees, organizations can create a robust defense against the spread of fake information.

Additionally, employees should be trained to recognize when AI outputs might be based on unreliable data or could be the result of manipulation by bad actors. This awareness can help prevent the internal spread of misinformation and ensure that AI tools are used responsibly and effectively.


 

Fight AI With AI

As AI technology advances, tools like GPT Fact Checker are set to become more sophisticated, enhancing their ability to distinguish fact from fiction with greater accuracy. Organizations can capitalize on these advancements by incorporating AI-driven fact-checking into their internal communications to curb misinformation. 

AI should complement, not replace, human judgment, however. Training employees to critically evaluate information and encouraging open dialogue can further improve these tools’ effectiveness. By blending AI's analytical power with human oversight, companies can foster a more informed and resilient workplace, protecting their reputation and productivity from misinformation’s harmful effects.
