The evolution of technology has changed the dynamics of political campaigns and is influencing the outcome of elections. Social media and online posts have become pivotal in shaping public opinion. And artificial intelligence is having a significant impact, especially when fraudsters use deepfakes to influence and deceive voters.
The 2024 election is already heating up and promises to be particularly combative, with false statements and cries of “fake news” flying as political opponents smear one another. And digital forgeries threaten the integrity of the image every candidate presents. Expect a surge of deepfakes going viral across the internet as campaign managers and foreign powers strive to influence the election outcome.
What Is Microtargeting?
Microtargeting is a ploy that delivers messages specifically designed to sway voters by speaking to personal interests or vulnerabilities, a political practice widely considered unethical.
Real-Life Dangers of Deepfakes and Disinformation
Deepfake technology uses generative AI to alter sounds and images, creating realistic fabrications that are often indistinguishable from the original. The term deepfake describes both the technology and the result: typically an image, video or audio clip altered to impersonate someone or change its message.
Deepfakes have become especially prevalent online as a tool to mislead consumers and voters, often with dangerous consequences. Jordan Peele even created a deepfake public service announcement featuring a computer-generated President Obama to illustrate how realistic deepfakes can be.
One of the most recent and most publicized deepfakes was the phony Taylor Swift cookware giveaway, in which a deepfake rendering of the pop star offers fans free cookware as a come-on for a technology company. A more disturbing example is the deepfake video of President Volodymyr Zelenskyy urging Ukrainian soldiers to surrender to Russia.
Bad actors are weaponizing generative AI to disseminate disinformation and sway public opinion, using altered content to impersonate political figures such as President Joe Biden. The same deepfake technology is being used to generate web content, robocalls and other material disseminated across multiple media channels.
How can you trust what you see?
How GenAI Creates Confusion During Elections
Of course, generative AI isn’t just for deception. GenAI can be invaluable to any marketing program, including political campaigns. AI is a cost-effective way to create targeted and personalized marketing programs, such as by generating customized emails.
Generative AI is also useful for market segmentation, simplifying targeting by demographic, income, geography and other criteria. When appropriately applied, AI enables even the smallest campaigns to increase their impact through targeted advertising.
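As a simple illustration of this kind of segmentation, here is a minimal, hypothetical sketch of how a campaign might bucket a voter list by a demographic rule and send templated outreach. The names, criteria and message copy are invented for this example; a real campaign would layer a generative model on top to draft the copy itself.

```python
# A minimal, hypothetical sketch of rule-based audience segmentation
# and templated outreach. All data and criteria here are invented.
from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    age: int
    zip_code: str

def segment(voters: list[Voter]) -> dict[str, list[Voter]]:
    """Bucket voters by a simple demographic rule (age, in this sketch)."""
    segments: dict[str, list[Voter]] = {"young": [], "senior": [], "general": []}
    for v in voters:
        if v.age < 30:
            segments["young"].append(v)
        elif v.age >= 65:
            segments["senior"].append(v)
        else:
            segments["general"].append(v)
    return segments

# Hypothetical message templates, one per segment.
TEMPLATES = {
    "young": "Hi {name}, here's the campaign's plan on student debt...",
    "senior": "Dear {name}, here's our plan to protect Social Security...",
    "general": "Hello {name}, here's where the campaign stands on the economy...",
}

voters = [Voter("Ana", 24, "02139"), Voter("Sam", 71, "33101")]
for label, group in segment(voters).items():
    for v in group:
        print(TEMPLATES[label].format(name=v.name))
```

The same mechanics work whether the message is a legitimate campaign appeal or targeted disinformation, which is exactly the dual-use problem described next.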
The same technology, however, can be used to send targeted disinformation. Deepfakes can be used to disseminate everything from phony endorsements to deceitful political advertising. And with the same voters who are targeted for deepfake messages also receiving legitimate campaign emails, voter confusion and distrust are spiking.
Political campaigns also need to be wary of biases inherent in AI systems. Generative AI models are trained on large data sets that include historical data, public information and web content, and they inherit the biases baked into those sources.
Bad actors also use generative AI to create bots: automated social media accounts designed to spread misinformation. In 2017, for example, bots flooded social media with falsified information in a campaign against Emmanuel Macron ahead of the French presidential election.
How Companies and Governments Are Fighting Back
Deepfakes represent the latest digital fraud, and detection is an ongoing problem. The first step in combating political deepfakes is recognizing the problem and committing to a solution.
Leading tech companies are already stepping up. At the 2024 Munich Security Conference, Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok committed to a voluntary framework for responding to deepfakes designed to trick voters.
The accord calls for participants to detect and label deepfakes on their platforms rather than ban and remove them. That isn't a complete solution, but it is a first step toward making deepfake messages easier to identify. Some organizations, like Attestiv, also provide deepfake detection technologies that protect organizations such as insurers and media companies.
While there is no federal legislation prohibiting deepfake political ads yet, there has been some progress. Some state governments are working to control deepfakes through legislation. States already have laws making impersonation illegal, but most predate the internet.
Many states are enacting new legislation specifically prohibiting “deepfakes,” “synthetic media” or “deceptive media” from being used in political campaigns. Last year, Minnesota became one of the first states to make it a crime to use deepfakes to influence an election. The Federal Communications Commission has also ruled that robocalls using AI-generated voices are illegal.
How AI Can Detect Tampering
Detecting deepfakes remains a critical challenge. Law enforcement, media platforms and voters need to stay vigilant against the predatory technology. Questioning what you see online or in social media threads is a start, but there needs to be a litmus test for authenticity.
Ironically, the same technology responsible for generating deepfakes can also be used to detect them. Just as generative AI can alter photos, video and audio, it can also analyze content for anomalies in the imagery or its metadata that reveal tampering. AI-powered tools can scan content, highlight sections that may have been altered, and even provide a confidence score reflecting the likelihood of tampering.
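To make the idea concrete, here is a minimal sketch of one classic tampering heuristic, error-level analysis, which recompresses a JPEG and measures how much the pixels change; edited regions often recompress differently from untouched ones. This is illustrative only, since production deepfake detectors rely on trained neural networks, and the file name below is hypothetical.

```python
# A minimal sketch of error-level analysis (ELA), a classic heuristic for
# spotting edited regions in a JPEG. Illustrative only: real deepfake
# detectors use trained models, not this simple recompression diff.
from PIL import Image, ImageChops
import io

def ela_score(path: str, quality: int = 90) -> float:
    """Re-save the image at a fixed JPEG quality and measure how much
    it changes; heavily edited areas tend to recompress differently."""
    original = Image.open(path).convert("RGB")

    # Recompress the image into an in-memory buffer at a known quality.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Pixel-wise difference between the original and the recompressed copy.
    diff = ImageChops.difference(original, resaved)

    # Summarize the difference as a crude 0-1 "suspicion" score.
    extrema = diff.getextrema()  # one (min, max) pair per color channel
    max_diff = max(ch_max for _, ch_max in extrema)
    return max_diff / 255.0

if __name__ == "__main__":
    score = ela_score("campaign_photo.jpg")  # hypothetical file
    print(f"Tampering suspicion score: {score:.2f}")
```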
This tech can also fingerprint or watermark digital content as authentic or as AI-generated. While standards and solutions for easily identifying photo, video, audio or document files are progressing, their use is unlikely to be widespread in 2024, and they are even less likely to be adopted by parties wishing to spread misinformation.
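As a simplified illustration of fingerprinting, the sketch below computes a cryptographic hash of a file so that any later alteration can be detected. Real provenance standards such as C2PA embed signed metadata rather than bare hashes, and the file names here are hypothetical.

```python
# A minimal sketch of content fingerprinting with a cryptographic hash.
# Provenance standards like C2PA go further (signed, embedded metadata),
# but the core idea is the same: any alteration changes the fingerprint.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A publisher records the fingerprint at release time; anyone can later
# recompute it to confirm the file hasn't been altered in transit.
original_print = fingerprint("official_ad.mp4")    # hypothetical file
received_print = fingerprint("downloaded_ad.mp4")  # hypothetical file
print("Authentic" if received_print == original_print else "Possibly altered")
```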
Stay Discerning This Election Season
Identifying and labeling deepfakes can help combat disinformation during an election, but the volume of deepfake content is growing, and only select media sources are likely to adopt deepfake detection. Ultimately, the best defense against deepfakes is education and using available detection tools.
For the former, the public must become more aware of deepfakes and more discerning about digital information. Just as consumers have learned to spot phishing attempts, voters must learn to spot disinformation. One place to start is by relying only on credible sources of information; it also pays to verify questionable content across other trusted sites. Political candidates can fight back with public awareness campaigns. For the latter, now is the time to build a defense with the detection tools available.
Technology is opening new channels to reach voters while also providing new tools to mislead them with manufactured information. Fighting deepfakes will require a combination of technology and public education.
Tech leaders and social media companies must be more aggressive in identifying, labeling and removing deepfakes. Voters must also be careful to distinguish between genuine and counterfeit information in this rapidly evolving age of AI.