Explainer

How to Detect and Guard Against Deceptive AI-Generated Election Information

Time-tested fact-checking practices will help limit the effectiveness and spread of misleading election information.


Generative artificial intelligence is already being deployed to mislead and deceive voters in the 2024 election, making it imperative that voters take steps to identify inauthentic images, audio, video, and other content designed to deceptively influence their political opinions.

While election disinformation has existed throughout our history, generative AI amps up the risks. It changes the scale and sophistication of digital deception and introduces a new vocabulary of detection and authentication concepts that voters must now grapple with.

For instance, early in the generative AI boom in 2023, a cottage industry of articles urged voters to become DIY deepfake detectors, searching for mangled hands and misaligned shadows. But as some generative AI tools outgrew these early flaws and hiccups, such instructions acquired greater potential to mislead would-be sleuths seeking to uncover AI-generated fakes.

Other new developments introduce different conundrums for voters. For example, major generative AI and social media companies have begun to attach markers to trace a piece of content’s origins and changes over time. However, major gaps in usage and the ease of removing some markers mean that voters still risk confusion and misdirection. 

Rapid change in the technology means experts have not reached consensus on precise rules for every scenario. But for today’s voters, here is the most important advice:

  • Employ proven practices for evaluating content, such as seeking out authoritative context from credible independent fact-checkers for images, video, and audio, as well as unfamiliar websites.
  • Approach emotionally charged, sensational, and surprising content with a cautious lens.
  • Avoid getting election information from generative AI chatbots and search engines that consistently integrate generative AI, and instead go to authoritative sources such as election office websites.
  • Exercise responsibility when sharing political content that may be generated by AI.

Develop best practices for evaluating content.

To effectively navigate this new landscape, voters should adopt a critical approach toward both the information they consume and its sources. When confronted with sensational images, video, or audio, new information about voting, or details about the election process from an unfamiliar or unverified website or account, voters should: 

  • Evaluate the legitimacy and credibility of the content or media — investigating the source’s background can help prevent misinterpretation or manipulation. 
  • Go straight to a credible independent fact-checking site — such as PolitiFact, AP Fact Check, or another site verified by the International Fact-Checking Network — or the Artificial Intelligence Incident Database operated by the Responsible AI Collaborative to try to verify the authenticity of content or to submit content to such sites or databases for verification. Using search engines is not the best first step because they sometimes return inaccurate content based on a user’s search history.
  • Approach emotionally charged content with critical scrutiny, since such content can impair judgment and make people susceptible to manipulation.
  • Maintain a balanced approach to evaluating election information. While a degree of skepticism towards some online election-related content is necessary, excessive analysis of generic images or videos can be counterproductive, providing opportunities for bad actors to discredit authentic information.

Know that AI improvements mean fewer clues.

We do not encourage voters to spend time searching for “tells,” or visual errors, such as misshapen hands, impossibly smooth skin, or misaligned shadows. Generative AI tools are getting better at avoiding such errors; for example, a mid-2023 software update from Midjourney — a popular generative AI image creation tool — significantly improved the quality of the tool’s rendering of human hands. Of course, if a visual error is obvious, voters should be more skeptical of the image or video and seek out additional context and verification.

Look out for labels describing content as manipulated.

As states race to regulate deepfakes, voters must become familiar with the terminology used to indicate AI-generated or artificial content. New state laws in New York, Washington, Michigan, and New Mexico, for instance, limit the spread of political AI deepfakes by requiring use of disclaimers on some election-related synthetic content. Laws typically require that disclaimers contain language similar to “this [image, video, or audio] has been manipulated.”

Consider “content provenance” information but realize its limits.

New standards aim to give voters information about the creation and edit history of images, video, and audio, but they are not yet widely adopted by major social media and search companies.

Meta and Google have indicated that they will soon start using such standards and credentials but have not yet done so consistently. Several prominent companies have also signed on to the Coalition for Content Provenance and Authenticity (C2PA), which maintains an open technical standard for tracing media. But even after those standards are initially incorporated, voters cannot rely on the absence of provenance information to disprove the authenticity of content until major companies consistently and universally integrate content credentials into, for example, mobile cameras.

Further, many generative AI tools — especially those that are “open source,” or have publicly available source code and other information — will not abide by these norms or will allow such features to be easily disabled, giving the standards limited utility for voters.
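
To see why a missing marker proves so little, consider how fragile embedded metadata is. Below is a minimal Python sketch using the Pillow imaging library (the file names are hypothetical); it shows that simply re-saving an image discards the metadata embedded in the original file, much as content credentials attached to a file can be lost or stripped when the file is re-encoded or screenshotted.

    # Why absent provenance data proves nothing: a plain re-save
    # silently drops embedded metadata. File names are hypothetical.
    from PIL import Image

    original = Image.open("labeled_photo.jpg")
    print(dict(original.getexif()))      # embedded metadata is present

    original.save("resaved_photo.jpg")   # re-encode without carrying metadata over

    resaved = Image.open("resaved_photo.jpg")
    print(dict(resaved.getexif()))       # prints {} -- the markers are gone

Because a single re-encode can erase these markers, their absence tells a voter nothing about whether a piece of content is authentic.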

If we do achieve wide implementation of content provenance information — which might include a drop-down label attached to an image or video explaining how it was created and edited over time — voters should adopt the following general approach to evaluating information:

  • Where provenance information indicates that political content has been visually manipulated, seek out additional context to help verify the source.
  • Where provenance information is missing or invalid, do not automatically make assumptions regarding the content’s authenticity or inauthenticity. Seek out additional information and context to verify the source.
  • Unless content is given a straightforward label by a social media platform like “made by AI,” do not over-rely on provenance information to verify or disprove the authenticity of political content. Treat it as part of your set of tools to verify content, which should include consulting with credible fact-checking sites. 

Avoid over-relying on AI detection tools. 

In general, voters should not depend on deepfake detection tools to verify or disprove the authenticity of political content because the tools have limited accuracy. And detection tools’ effectiveness will likely wax and wane as AI generation tools themselves become more sophisticated.

If a voter chooses to use a deepfake detection tool, they should find one that is clear about the possibility of error and transparent about the level of confidence of analysis, such as by offering a percentage likelihood of accuracy. TrueMedia.org offers one such tool, but it is not yet universally available to the public.
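
For readers who want a concrete sense of what treating such a score as one signal among many looks like, here is a short illustrative Python sketch. The function, the 0–100 score, and the thresholds are assumptions made for this example, not the output or API of any real detection tool.

    # Hypothetical illustration: reading a detector's 0-100
    # "likelihood AI-generated" score conservatively. The thresholds
    # are assumptions, not any real tool's guidance.
    def interpret_detection_score(score: float) -> str:
        """Translate a detector's confidence score into cautious advice."""
        if score >= 90:
            return "Strong signal of AI generation; still confirm with a fact-checker."
        if score <= 10:
            return "Weak signal; a low score is not proof of authenticity."
        return "Inconclusive; treat the score as one input, not a verdict."

    print(interpret_detection_score(62.0))  # prints the 'Inconclusive' advice

The wide middle band is the point: a middling score should send a voter back to independent fact-checkers rather than settle the question.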

Be cautious when using search engines that integrate generative AI and chatbots. 

Some search engines, such as Microsoft Copilot and Perplexity, integrate generative AI into their responses. These engines create risks for voters who use them to search for information about elections.

Microsoft Copilot’s responses to basic election questions in certain global elections were rife with errors, a 2023 study found. Recent research also suggests that popular AI chatbots — such as Google’s Gemini, OpenAI’s GPT-4, and Meta’s Llama 2 — can give incorrect responses to simple election questions. While these chatbots sometimes redirect users to official election information sources, it is better to go directly to an authoritative source to find accurate information, such as your local county election website, the National Association of Secretaries of State website, or vote.gov.

Google has begun experimentally integrating generative AI into some search results through an “AI overview” panel at the top of the search results page. When it comes to election information, voters should not rely on these AI overviews. However, voters can typically rely on a Google knowledge panel that appears for an election query, as long as the panel is not based on generative AI.

•  •  •

Finally, voters should act responsibly when sharing their own and others’ AI creations. They should assess the potential for harm or misinformation before sharing generative AI content, provide disclosures for political AI-generated content, and verify the accuracy of information from multiple reliable sources before sharing.

In the end, government and technology companies must act to make voters’ tasks easier. But in the absence of adequate action, the strategies above offer voters a path to better inoculating themselves and others against deception exacerbated by generative AI in the 2024 election.