Expert Brief

Preparing to Fight AI-Backed Voter Suppression

AI tools could make it easier than ever for bad actors to wage campaigns aimed at suppressing votes.

Published: April 16, 2024

On the eve of the 2024 New Hampshire primary, robocalls impersonating President Joe Biden counseled voters against partaking in a write-in campaign supporting Biden, urging them to “save your vote” for the general election. It was the first known instance of the deployment of voice-cloning artificial intelligence at significant scale to try to deter voters from participating in an American election. A political operative later admitted to commissioning the scheme; creating the fake audio reportedly cost just $1 and took less than 20 minutes. Similar attempts are almost certain to plague future elections as the rapid uptake and development of generative AI tools continue apace.

This phenomenon is not entirely new — vote suppression through disinformation has a long history in the United States. Since Black Americans and other Americans of color gained the formal right to vote, malefactors have committed acts of terror to intimidate voters and pressed for restrictive election laws that created unjustifiable barriers to voting. For at least 25 years, these suppression efforts have also taken the form of deceptions designed to prevent minority citizens from voting. Similarly, antagonists of American democracy have removed eligible voters from registration lists, specifically targeting minority voters. From the Reconstruction era to the digital age, these strategies have persisted and evolved, retaining core elements even as new technologies and platforms have allowed for more precise and rapid targeting of voters.

AI has the capacity to supercharge these risks, breathing new life into dated chicanery and placing new burdens on the right to vote. Generative AI introduces the possibility of more sophisticated methods of deception, capable of being deployed more cheaply and swiftly on a wider scale. AI’s persuasive potential may increase over time as current technological limitations are quickly surpassed and different forms of AI are combined in new ways. Some kinds of AI systems will allow election deniers and other discontents to submit mass private challenges to voters’ registration statuses more efficiently — possibly with even less transparency and with a novel patina of faux legitimacy.

While it remains unclear how much AI will change the face of vote suppression in the 2024 general election, new developments in AI use and capabilities lend fresh urgency to long-standing efforts to abate attempts to subvert elections. Those developments necessitate strong new policy interventions to minimize the dangers on democracy’s horizon.

The Pernicious History of Attempted Vote Suppression

Bad actors have long deployed deceptive practices to try to suppress the vote in the United States, often by targeting low-income, Black, immigrant, and other marginalized communities. For decades, they have circulated false information about how, when, and where to vote. They have also spread baseless claims about adverse consequences, such as arrests or deportation, befalling groups of eligible voters who exercise the franchise. Historically, such deceptions manifested in flyers and other materials placed in voters’ physical environments, and later in robocalls and other mass communications. On Election Day in Texas’s 2020 presidential primary, for instance, robocalls falsely informed people that voting would take place a day later, attempting to trick voters into arriving at poll sites too late to cast a lawful ballot.

In the digital age, malcontents increasingly promote falsehoods online about the election process, microtargeting specific voter demographics with greater precision in local, state, and federal races. Such deceptions often draw on common, recurring misinformation tropes about the voting process and experience. In the 2018 and 2020 elections, operatives targeted Latino communities online by circulating false claims that Immigration and Customs Enforcement (ICE) officers were patrolling voting locations. Other vote-suppression attempts rely on alternative methods of deception — for example, malefactors assuming false identities that mirror their targets’ race, tapping into potential voters’ doubts about the political efficacy of voting, and urging citizens to boycott elections.

In some cases, agents of disinformation are held accountable for vote-suppression efforts under existing laws prohibiting voter intimidation and election interference. In the 2016 election, a prominent influencer promoted messages deceptively informing Black voters that they could cast ballots online or by phone. The influencer, Douglass Mackey, shared a fake advertisement — purportedly from Hillary Clinton’s campaign — depicting a Black woman with an “African Americans for Hillary” sign urging voters to “Avoid the Line” and “Vote from Home.” In 2023, a federal jury convicted Mackey of a criminal conspiracy to deprive Americans of the constitutional right to vote. Yet untold other purveyors of voting misinformation fall through the cracks. Some deceptive vote-suppression efforts have gone unrestrained because intent is difficult to prove or because the content itself is ambiguous.

Federal law does restrict political robocalls made with automated dialing software. As the Federal Communications Commission (FCC) recently clarified, federal law also restricts the use of voice-cloning AI in calls. The FCC issued a declaratory ruling classifying AI-generated voices in robocalls as “artificial” and thus governed by existing law. But companies and operatives may make robocalls — including those that employ AI voice generation — to consumers and voters who have given prior express consent. Federal law also continues to allow political robocalls to landlines — including those that use voice-cloning AI — without prior consent. (Landlines are disproportionately used by Americans over the age of 65.) And it remains relatively easy for voters to give unwitting consent to receiving a political robocall.

Wrongful voter purges and challenges can also function as a form of vote suppression. Ostensibly, voter purges are efforts to clean up registration lists by removing ineligible voters from the rolls. Jurisdictions sometimes perform such purges unlawfully, irresponsibly, or without accurate data. Most states also allow private individuals to lodge challenges to registered voters’ eligibility before or on Election Day — rules that have historically been exploited to target eligible minority voters. In recent years, coteries of activists — often mobilized by prominent election deniers, politicians, and pundits — have combed through voter rolls alongside incomplete and flawed sources of external information in an attempt to substantiate bogus claims of election fraud and fuel mass challenges to voter registrations.

The National Voter Registration Act (NVRA) and other federal and state laws set baseline standards for conducting voter purges, but additional safeguards are needed to reduce the risk of erroneous or deceitful disenfranchisement. Georgia recently passed a law that heightens the likelihood of wrongful disenfranchisement by confirming that registered voters in the state can lodge unlimited challenges to other voters’ eligibility. The incidence of voter purges also increased in the years after the Supreme Court gutted critical Voting Rights Act protections in the infamous Shelby County v. Holder case.

The Compounding Threat of AI

Coming elections may see election deniers deploying AI systems to fuel specious investigations, challenge massive numbers of voters’ eligibility, and lend phony sophistication to attempts to validate baseless claims of widespread voter fraud. These concerns are no mere speculation: in the lead-up to the 2022 election, misguided groups of citizens, in some cases mobilized by well-funded and coordinated election denier organizations such as the Conservative Partnership Institute, combed through voter rolls and other sources of often-suspect data, performing faulty and error-prone analyses to file mass voter challenges.

These efforts, which sought to strip tens of thousands of voters from registration rolls, undermined public faith in the integrity of elections. They relied on rudimentary data matching using the National Change of Address list, a portal from the government contractor Schneider Geospatial, public map services, tax assessor data, and other incomplete sources of information. In Georgia, a handful of activists challenged close to 100,000 voter registrations, burdening, among others, unhoused voters and voters with health conditions, who were forced to rapidly defend their right to vote or relinquish it. In one Georgia county, an organization challenged the eligibility of more than 6 percent of the county’s total registered voters, forcing overburdened county officials to validate tens of thousands of voters ahead of the election.

AI tools are poised to strengthen these suppression efforts and lend them more undeserved credibility. To file challenges, groups of activists are now using EagleAI, a tool that appears to be driven by AI algorithms and that purports to identify suspicious voter registrations by performing automated matches of voter data against public databases of varied and sometimes questionable quality — such as “scraped or sourced funeral home obituaries” (which may themselves be false, AI-generated clickbait), land use codes, “scrapped business addresses,” information from the Department of Health, prison data, and so on — collated from across the web or imported through private channels and integrated into the EagleAI system. EagleAI also fast-tracks the process of preparing voter challenge forms by allowing activists to complete them with just a few clicks. Its makers claim that it employs a “multi-level match factor” through “the use of AI” and “multi-tiered algorithms,” and that it can pull matches with the “highest tier confidence.”

Using tools like EagleAI, malcontents can file mass voter challenges on an alarming scale based on shoddy, unverified, or incomplete evidence. According to one demonstration, EagleAI also stores challenge forms and allows activists to file them on the last possible day for such submissions, burdening election officials close to an election and minimizing the time available for review. At least one Georgia county has recently approved EagleAI’s use for voter roll maintenance, meaning that not only can private citizens use it to challenge voters, but it may also serve as a flawed official arbiter of those same challenges. These revved-up efforts to strip voting eligibility take place against another ominous backdrop: in 2018, a decades-old consent decree that had restrained the Republican National Committee from improperly targeting minority voters when lodging voter challenges was lifted.

Data matching, automated and otherwise, is riddled with pitfalls and — without additional safeguards — is an unreliable method of protecting election integrity. Common names, transposition errors, outdated or incorrect data, rental relationships, and diverse living arrangements all contribute to the perils of relying on data matching alone to verify registration records. Federal law builds in some guardrails to protect voters against disenfranchisement when officials use data matching to maintain voter registration lists and verify voters’ identities. But the capacities, complexities, and opacities of AI systems employed to analyze such data pose untold risks: without more detailed explanations from providers as to what those algorithms entail, it can be difficult (if not impossible) for users, observers, officials, and regulators to evaluate or understand the systems’ outputs.
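To make the failure mode concrete, the minimal sketch below (in Python, with invented records; it is not drawn from any real voter file or from EagleAI’s actual code) shows how a rudimentary exact match on name and date of birth can flag eligible voters based on external records that belong to other people.

```python
from dataclasses import dataclass

@dataclass
class VoterRecord:
    name: str
    dob: str       # date of birth, YYYY-MM-DD
    street: str

@dataclass
class ExternalRecord:
    name: str
    dob: str
    note: str      # e.g., an obituary or change-of-address entry

# Hypothetical registered voters (invented for illustration).
voter_roll = [
    VoterRecord("James Smith", "1958-03-14", "12 Oak St"),
    VoterRecord("Maria Garcia", "1990-07-02", "88 Pine Ave"),
]

# Hypothetical external data of the kind mass challengers rely on: an obituary
# for a different person who shares a common name and birth date, and a
# change-of-address filing by a relative with the same name.
external_data = [
    ExternalRecord("James Smith", "1958-03-14", "obituary (different person, same name and birth date)"),
    ExternalRecord("Maria Garcia", "1990-07-02", "change-of-address filed by a family member with the same name"),
]

def naive_match(voter: VoterRecord, record: ExternalRecord) -> bool:
    """Flag a voter when name and date of birth match exactly, with no
    identity verification beyond the join itself."""
    return voter.name == record.name and voter.dob == record.dob

for voter in voter_roll:
    for record in external_data:
        if naive_match(voter, record):
            # Both eligible voters end up flagged for challenge on flimsy evidence.
            print(f"FLAGGED for challenge: {voter.name} ({record.note})")
```

Adding more fields or fuzzier matching can reduce, but not eliminate, collisions of this kind, which is why the human review and evidentiary standards discussed below matter.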

Doing more to protect eligible voters against wrongful disenfranchisement is critical, especially in an era when AI’s functionality and adoption are both skyrocketing. EagleAI — and the potential development and proliferation of similar AI tools — may significantly increase the number of voter challenges and purges in 2024 and beyond. Using such tools, malefactors and misguided citizens alike will be able to cast doubt on countless voters’ eligibility both ahead of and on Election Day, swiftly and systematically drawing from a sweeping range of sometimes dubious and often partial sources more easily than ever. Such efforts risk overwhelming already overburdened election officials and preventing them from fully serving voters at a critical moment in the elections process. They also risk amplifying false claims about election integrity by deploying simple or flawed algorithms to produce unreliable conclusions while cloaking bad-faith mass challenges to voters’ eligibility in a meritless veneer of sophistication.

New Dangers for 2024 and Beyond

How significant a role generative AI will play in disinformation-driven efforts at vote suppression in the 2024 election remains to be seen, but troubling possibilities abound for 2024 and beyond. Generative AI may change the speed, scale, and sophistication of traditional deceptive practices used in vote-suppression attempts.

As seen in New Hampshire, deceptive robocalls intended to deter voters or to spread falsehoods about voting could gain an added layer of sophistication with AI voice-generation technology that can convincingly mimic trusted and authoritative messengers. In the worst-case scenarios in 2024, on or before Election Day, malefactors could swiftly and cheaply create and share AI-generated audio or video deepfakes to fabricate spurious emergencies at vote centers, deceptively depict damaged voting equipment or ballot drop boxes, or falsely portray officials preventing eligible voters from casting ballots. They could deploy generative AI chatbots to spread falsehoods about where, when, and how to vote at a vastly increased pace and scale. Disinformation campaigns could produce reams of AI-generated content to more effectively manipulate the recommendation algorithms that help curate social media content, such that falsehoods about the voting process trend and go viral. They could also harness generative AI to spoof election websites more quickly and on a larger scale to publicize deceptions about the voting process.

Generative AI tools are already transforming the landscape of election deception. In time, the proliferation of open-source chatbot technology could make it even easier and cheaper for malcontents to repurpose generative AI models for vote-suppression efforts and other malicious purposes. Well-resourced corporations, individuals, and state-aligned organizations could train AI models on influence and manipulation techniques in order to sway and deter voters. They might specially train large language models (or LLMs, the technology underlying generative AI chatbots) — potentially coupled with voice- or video-generation algorithms — to produce false information about the voting process and democratic systems. Such models could be highly effective disinformation mechanisms, operating as a force multiplier for persuasion-based suppression attempts.

Antagonists could deploy AI systems to interact with voters directly, combining voice synthesizers and robodialers, exploiting encrypted social media platforms, or creating customized chatbots to foster apathy about voting or disenchantment with American democracy. Even before the generative AI boom, encrypted platforms had become hot spots for deceptive bot-driven activity in Brazil, and disinformation campaigns on these platforms particularly affected U.S. voters in Latino and Indian American communities. The spread and popularization of generative AI tools allow similar campaigns to level up in strategy and execution. 

One cannot expect AI systems to magically transform disinformation’s persuasiveness in the face of other factors that strongly shape beliefs, such as partisanship, motivated reasoning, and access to reliable sources for election facts. But AI tools would enjoy several advantages that may, over time, be game changers. The foremost of these is scale: even a small effect multiplied by the millions could become meaningful. On the other end of that spectrum, generative AI may grow disinformation campaigns’ persuasion power at the individual level. Interactive AI systems can adapt in real time to a voter’s responses; given time and enough input, they might be trained to calculate optimally persuasive arguments tailored to an interlocutor’s positions, or to more accurately predict a voter’s emotional state by analyzing tone or mannerisms.

AI developers might employ reinforcement learning to make algorithms more effective over campaign cycles through repeated interactions with the same voter. Systems could also be optimized to exploit voters’ racial, ethnic, religious, or other demographic characteristics through enhanced microtargeting techniques. Current iterations of AI are understood to be relatively blunt instruments when it comes to detecting emotions, and at least one study has cast doubt on the microtargeting potential of OpenAI’s GPT-4. But generative AI capabilities are bound to become more sophisticated — especially as developers create and train new models for special purposes. Sustained interactions with voters — for instance, through an enhanced version of Replika, a social chatbot designed to “bond” with humans — could allow for subtler forms of long-term persuasion. While such tools may lack the gestures and mannerisms that bolster the persuasiveness of human-to-human interactions (unless connected to a sophisticated video avatar), they would have the advantage of being able to generate and select from innumerable messages tailored to deter voters, among other nefarious goals. And they may engender more trust from users than one-off interactions.

In the 2024 election and beyond, foreign influence campaigns that engage in vote-suppression activities stand out as major potential beneficiaries of generative AI advancements. Several U.S. adversaries have been known to interfere in American elections, sometimes by seeking to depress turnout and deter certain groups from casting ballots. For example, analyses by the Senate Intelligence Committee and Oxford’s Computational Propaganda Research Project found that Russia attempted to suppress Black voter turnout during the 2016 election by exploiting social media platforms, producing content designed to fuel racial tensions, urging Black voters to follow fake voting procedures, and recommending that Black voters boycott the election. Such campaigns are not limited to Russia: information operations that target American voters have also been deployed by China, Iran, North Korea, Saudi Arabia, and other global powers.

Although evidence is limited that foreign influence operations successfully altered American election outcomes in the recent past, new generative AI tools could make future disinformation campaigns more widespread and more effective. Generative AI might augment foreign influence campaigns in several ways. Such tools lower the cost of mass digital deception, which in turn could increase the scale of such efforts. Content might become more sophisticated and harder to detect as well. Earlier operations backed by Russia and other influence outfits often relied on mass duplication and produced content marred by mistranslations, grammatical mistakes, and misused idioms. Sophisticated generative AI tools would likely help blunt those flaws, rendering the products churned out by disinformation mills less dissonant and more varied — and thus less detectable and potentially more persuasive. As described above, interactive disinformation techniques propelled by generative AI may also worsen matters.

Influence campaigns have used early iterations of generative AI technology for several years, synthesizing profile pictures of nonexistent people to prop up bot accounts. But in early 2023, Chinese state media touted the debut of virtual news anchors created by video-generation AI, and China-aligned bot accounts shared videos of AI-generated TV hosts for a fake news outfit spouting propaganda. Microsoft announced in September 2023 that it had detected suspected foreign influence operations using AI-generated images, and earlier this month it found that Chinese state-affiliated actors are now spreading AI-generated content designed to sow domestic division in the United States and other nations. In other words, the use of more sophisticated generative AI tools by states and state-aligned groups to shape the political information environment is not only imminent — it is already occurring.

Solutions

Amend deceptive practices laws. 

The Deceptive Practices and Voter Intimidation Prevention Act is a well-tailored federal bill designed to curb vote-suppression efforts that involve false claims about when, how, and where to vote, knowingly spread with the intent to prevent or deter voting within 60 days before a federal election. Similar state-level legislation addressing false claims about voter registration information and the time, place, and manner of elections exists in Kansas, Minnesota, and Virginia, for example, and other states are considering moving ahead with such efforts.

While such deceptive practices bills and laws could be read to cover some vote-suppression activity propelled by generative AI, a modest addition to their legislative text could help ensure accountability for those who purposely develop AI systems designed to deceive voters about the voting process — and whose AI tools subsequently communicate false information about when, how, and where to vote.

As a preliminary matter, legislation should expressly cover the development and intentional dissemination of AI tools designed to deceive voters. Additionally, under several existing laws and bills that address deceptive election practices, liability for deceiving voters about the voting process requires that an antagonist know that the claim they communicated was false. But because ill-intentioned AI creators may not have contemporaneous knowledge of the falsity of each piece of content produced by their algorithms, legislators should amend deceptive practices laws and bills so that generative AI developers need not have precise knowledge of each false claim rendered in order to become liable; the minimum legal standard should be knowledge that a tool is designed to produce false claims.

Limit the spread of additional risky AI-generated content that endangers voting rights. 

Federal and state laws should also bar or limit the dissemination of synthetic visual and audio content — including AI-generated content that falsely depicts damage to or impediments to the use of voting machines, voting equipment, or ballot drop boxes; manufactures disasters or emergencies at polling places; or falsely portrays election workers preventing or hindering voting — where such content is created or spread with the purpose of deterring voters or preventing votes within 60 days before Election Day.

While many deepfake bills and laws have zeroed in on AI deepfakes that damage candidates’ reputations and electoral prospects, deepfakes that threaten the right to vote merit similar attention from lawmakers and would likely enjoy a lower level of constitutional free speech protection. Similar to the Deceptive Practices and Voter Intimidation Prevention Act, such laws should compel state attorneys general to share accurate corrective information upon receiving a credible report that vote-suppressing content is being disseminated, if election officials fail to adequately educate voters.

In addition, such laws should not be limited to AI-generated content, but rather should extend to all visual and audio content created with the substantial assistance of technical means, including Photoshop, computer-generated imagery (CGI), and other computational tools.

Require labeling and other regulation for some content generated by chatbot technology or distributed by bots.

Much attention in Congress and the states has focused on labeling or otherwise regulating visual and audio AI-generated content (including deepfakes) in the election context. But the technology behind generative AI chatbots poses different dangers, particularly when it comes to vote suppression through exploitative microtargeting and manipulative interactive AI conversations. Campaigns’ and political committees’ use of LLM technology, too, raises potential democratic concerns — for instance, a generative AI chatbot making promises to voters that are untethered to the campaign’s actual platform.

Federal and state laws should compel campaigns, political committees, and paid individuals to label a subset of LLM-created content. Such labeling requirements and other regulatory efforts should focus on AI-generated content distributed by bots wherein the bots deceptively impersonate or masquerade as human; interactive LLM-driven conversations between campaigns, PACs, and voters online and through robocalls; campaign and PAC use of LLMs to communicate with voters with minimal human supervision and oversight; and AI-generated communications from campaigns and PACs that microtarget voters based on certain demographic characteristics or behavioral data.

Regulate the use of AI to challenge voters and purge voter rolls.

The NVRA sets important limits on voter purges, but more guardrails are needed to protect voters from frivolous voter challenges and from the misuse of AI in voter challenges and removals. Congress and state legislatures should set baseline requirements governing the official use of AI systems to remove voters from rolls. Lawmakers should direct agencies to flesh out these requirements through regulation and to update rules as AI technologies evolve. To guard against improper voter disenfranchisement, legislatures and agencies should set thresholds for the accuracy, reliability, and quality of training data for AI systems used to assist officials in conducting voter purges. And they should require human staff to review all AI-assisted decisions to remove a voter from the rolls. 

State legislatures and officials should also make changes to voter challenge procedures and requirements. In states that allow private citizens to file eligibility challenges, policymakers should shield voters from frivolous challenges, set requirements for the documentation and evidence needed to substantiate a challenge, and impose constraints on what evidence is acceptable. As a preliminary matter, federal and state policymakers should prohibit the use of bots to transmit automated challenges to voters’ registrations to election offices. States should also require that private challenges be based on firsthand knowledge of a voter’s potential ineligibility — which does not include the use of AI-assisted or other forms of automated database matching.

Limit certain kinds of AI systems that infringe on autonomy and privacy and permit sophisticated forms of voter manipulation.

Congress and state lawmakers could regulate the creation and deployment of certain high-risk AI systems where such systems are used to influence elections and votes, including those designed to employ subliminal techniques, recognize emotions, conduct biometric monitoring, and use biometrics to assign people to categories such as racial groups. The European Union is seeking to impose a ban on real-time biometric surveillance and emotion recognition AI in certain contexts (including employment), as well as to prohibit AI systems that utilize subliminal techniques to harm people or distort their behavior. American lawmakers should limit similar AI tools used to manipulate voters, distort their behavior, or infringe on their personal autonomy or privacy interests. One approach would be to create a certification regime for the use of AI tools with these manipulation capabilities in sensitive contexts.

Strengthen regulation of political robocalls.

Policymakers should strengthen regulation of political robocalls to better protect voters from AI-boosted deception efforts. As mentioned above, the FCC recently confirmed that existing law reaches robocalls containing voice-generation AI, but substantial loopholes remain. Federal and state lawmakers should close the loophole that allows political robocalls — including those made through automated dialer systems and those that use AI-generated voices — to reach landlines without prior consent. Policymakers should also clarify that consent to receive a political robocall containing voice-generation AI must be preceded by clear notice that generative AI will be used and must reflect the recipient’s understanding of that fact (rather than more general consent to receive political outreach from an organization). This requirement would give voters greater protection from AI-generated robocalls coming from political campaigns and PACs with which they have interacted in the past.

Additionally, lawmakers should compel phone carriers and mobile device manufacturers to integrate high-quality tools that can screen calls for voice-generation AI and alert customers to a high likelihood of an AI-generated robocall.

Support election offices’ efforts to educate voters and defend against spoofing and hacking.

Election offices can take several steps to reduce the risks of AI-enhanced attempts to deceive people about the voting process. Most election office websites do not operate on a .gov domain — a web domain reserved for verified U.S. government entities — even though fraudsters sometimes spoof election websites to trick voters, a strategy that could become more common with the proliferation of generative AI tools. Migrating to .gov domains and educating constituents about the domain’s significance are straightforward ways to bolster the credibility of official election websites, and federal funds and support can facilitate this important step. Election offices can also preempt and “prebunk” recurring false narratives about the election process through accessible materials and resources.

Election offices should promote messages about the robustness of existing election security safeguards via their websites — a method that evidence suggests increases confidence in the election process across the political spectrum. They should also maintain up-to-date rumor control pages on their websites, develop crisis communications plans to rebut viral rumors that threaten to deter voters on or ahead of Election Day, and establish networks among hard-to-reach communities to share accurate and timely information about how to vote.

Pass the Freedom to Vote Act.

AI amplifies myriad long-standing election concerns, and vote suppression is no exception. Although it does not directly address AI, the proposed Freedom to Vote Act would safeguard voters and election administration by affording voters a wide range of critical protections against vote suppression. These measures include protections against improper voter purges. The proposed law would also guard against certain deceptive practices that risk disenfranchising voters by incorporating the Deceptive Practices and Voter Intimidation Prevention Act.