Expert Brief

States Take the Lead in Regulating AI in Elections — Within Limits

Legislators are intently focused on addressing deepfakes of themselves but far less so on the other potential harms that artificial intelligence poses to our democracy.

Published: August 7, 2024

The artificial intelligence boom has been accompanied by an upsurge in state legislation aimed at regulating this new technology. As many state legislatures close out their 2024 legislative sessions, the Brennan Center reviewed trends in the AI-related bills that states have introduced and passed so far this year. From January 1 to July 31, 2024, 14 states enacted new laws or provisions regulating the use of deepfakes in political communications.

However, there has been remarkably little new policy in a host of other areas related to AI and democracy, from how AI is used in election administration to how it could be used to attack the security of our elections or suppress votes. By contrast, states have enacted legislation to address AI-driven discrimination in housing and employment, AI-generated child sexual abuse material, and consumer privacy. In this review, we focus on legislation that is directly or indirectly related to AI and elections.

Legislation Addressing Deepfakes in Political Communications

Late last year, the Brennan Center looked at how states were tackling the regulation of artificial intelligence following the emergence of tools that can create sophisticated deepfake text, images, audio, and video that convincingly mimic real people. With many legislatures now wrapping up their 2024 sessions, we return to examine further action by lawmakers around the country.

As of July 31, a total of 151 bills addressing deepfakes and deceptive media in the elections context have been introduced or passed this year, representing approximately one-quarter of all bills introduced on the general topic of artificial intelligence. Most of these bills (at least 100) specifically targeted deepfakes and other deceptive media in political communications to the public.


As detailed in previous Brennan Center writing, these bills vary greatly in whom and what they cover, as well as in their stated rationales, the entities or individuals charged with enforcement, the subject matter addressed, and the penalties for violations. In general, these bills encompass deceptive communications about candidates or content created to deceive voters into voting for or against candidates.

One point of distinction among these laws is whether they ban the use of deepfakes and other manipulated media in political communications outright or permit them as long as they are labeled. Another is timing: in most states, the ban or disclosure requirement takes effect sometime before the election (usually 90 or 120 days prior to Election Day), while in some states, the requirement has no time limit. So far this year, states have enacted only new disclosure laws rather than outright bans, although Minnesota revised the ban it passed last year.


Legislation Addressing Other AI Threats to Democracy

The number of state bills introduced and passed in 2024 to regulate deepfakes in political communications is nothing short of astounding. But when it comes to regulating other uses of AI that could affect American democracy, statehouses have been relatively quiet.

The Brennan Center has urged policymakers to consider several measures: expanding voter intimidation laws to cover AI-generated content used to intimidate voters; adding protections for election processes against AI-assisted attacks; regulating deepfakes used to suppress votes or falsely allege election fraud; limiting and labeling AI-supported chatbots and robocalls; issuing guidance for election officials on how to use AI in election administration; and safeguarding against the use of AI to purge or challenge voters. Others have noted that AI poses both promise and peril in the area of redistricting.

As the sections that follow show, few bills covering these topics have been introduced, and even fewer have passed.

Voter Suppression

Many state laws meant to counter voter suppression can be read to cover some voter suppression activity supported by generative AI. However, as the Brennan Center has noted, “a modest addition to these laws could help ensure accountability for those who purposely develop AI systems designed to deceive voters about the voting process,” as well as those whose AI tools communicate false information regarding the time, place, and manner of voting. So far, few states have attempted to make such changes.

One state, Mississippi, recently enacted legislation that criminalizes the intentional distribution, within 90 days of an election, of a “digitization” that “is disseminated with the intent to injure the candidate, influence the results of an election or deter any person from voting” (emphasis added). The law defines digitization as “alter[ing] an image or audio in a realistic manner utilizing an image or audio of a person, other than the person depicted, computer-generated images or audio, commonly called deepfakes.” Digitization also includes “the creation of an image or audio through the use of software, machine learning artificial intelligence or any other computer-generated or technological means.”

A bill in Illinois sought to update the state’s voter intimidation laws by amending the election code’s definition of “deception or forgery” to include the creation and distribution of deceptive social media content likely to dissuade voter participation. It did not pass.

Threats to Election Workers and Administrators

The primary reason for regulating deepfakes at the state level appears to be to promote an informed electorate by banning or labeling synthetic media that could mislead voters about the conduct of candidates for political office. There are other potential rationales for regulating deepfakes in the elections space, however. Among them, as the Brennan Center has noted, is safeguarding the electoral process and protecting election workers, who are not public figures but have suffered vicious threats and intimidation in recent years fueled by defamatory statements and lies about the election process. AI deepfakes could exacerbate this already dangerous situation.

To protect election administrators and workers against these threats, some states have considered criminalizing the use of deepfakes and other deceptive media to mislead the public about the elections process in close proximity to an election. None of these bills has yet passed.

For instance, in California, two separate bills target deceptive content showing an elections official doing or saying something related to their job that “is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.” One of the bills also extends the restriction to “a voting machine, ballot, voting site, or other property or equipment related to an election in California portrayed in a materially false way.” Both bills have passed the California Assembly and are now before the Senate.

Similarly, Georgia proposed a bill that would have criminalized as “fraudulent election interference” the publication of “materially deceptive media within 90 days of an election with the intent to deceive . . . [and create] confusion about the administration of such election.” New Jersey has a similar proposal that would criminalize “knowingly or recklessly disclos[ing] deceptive audio or visual media with the intent to deceive a voter with false information about the candidate, the public question, or the election” within 90 days of the election.

Attacks on the Elections Process

The Brennan Center and others have warned that AI could be used to aid more sophisticated attacks against election offices and processes by, for example, using deepfakes to fool election workers into taking actions that threaten the integrity of elections. At least two states have looked at extending the regulation of deepfakes to include those that target official proceedings, including elections.

Legislators in Kentucky failed to pass a bill that would have criminalized the distribution of a “deepfake” that “could be reasonably expected to affect the conduct of any administrative, legislative, or judicial proceeding, including the administration or outcome of an election” (emphasis added). New Jersey is also considering a bill that would expand the scope of its identity fraud statute to include “the alteration of a public policy debate or election” and “improper interference in an official proceeding” using deepfake technology. Lawmakers in Illinois have proposed a bill that more broadly targets the use of a “deepfake” to disrupt an official proceeding, which would presumably include elections. The Illinois and New Jersey bills have yet to pass.

Election Misinformation from Chatbots

State legislators have also proposed bills governing the use and disclosure of AI chatbots in the electoral context. So far, we have not seen such bills pass at the state or federal level.

Lawmakers in several states proposed disclosure requirements for when people are interacting with chatbots, which we define as automated online accounts where all or substantially all of the posts are created by AI.

One New York bill proposed a disclosure requirement specifically for “any political communication, whether made by phone call, email or other message-based communication, that utilizes an artificial intelligence system to engage in human-like conversation.” Other proposals aimed to regulate chatbots more broadly, including another New York bill that would have required any “owner, licensee or operator of a generative artificial intelligence system” to “conspicuously display” a warning to users that the system may be inaccurate or inappropriate. Similarly, a California bill would have required disclosures in all interfaces (visual and audio), as well as “affirmative consent” at the beginning of the conversation.
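To make the mechanics concrete, here is a minimal sketch of what a disclosure-plus-consent flow along the lines of the California proposal might look like in a text-based chatbot. The bills do not prescribe any implementation; every function name and message below is hypothetical.

```python
# Hypothetical sketch of a disclosure-plus-consent flow for a text chatbot,
# loosely modeled on the California proposal described above. The bill text
# does not prescribe an implementation; all names and messages are invented.

def start_conversation() -> bool:
    """Conspicuously display the AI disclosure, then obtain affirmative consent."""
    print("NOTICE: You are interacting with an automated AI system, not a human.")
    print("Responses may be inaccurate or inappropriate.")
    answer = input("Type YES to consent and continue: ")
    return answer.strip().upper() == "YES"

def chatbot_reply(message: str) -> str:
    """Stand-in for a real model call; labels its output as AI-generated."""
    return f"[AI-generated response] You asked: {message!r}"

if __name__ == "__main__":
    if start_conversation():
        print(chatbot_reply("When does early voting start?"))
    else:
        print("Conversation ended: consent was not given.")
```

The key design point such bills share is sequencing: the disclosure must come before any substantive exchange, not buried in terms of service.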

An Illinois bill likewise would have banned the undisclosed use of bots, both in commercial transactions and “to influence a vote in an election.”

At the federal level, the bipartisan AI Labeling Act of 2023 would require generative AI developers to include a “clear and conspicuous” notice identifying AI systems, including chatbots, as producing AI-generated content. While not limited to use in the electoral context, the bill would apply to chatbots in political communications. Introduced by Sens. Brian Schatz (D-HI) and John Kennedy (R-LA), the bill was referred to the Committee on Commerce, Science, and Transportation last year and has not progressed since. A similar bipartisan bill was proposed in the House in March by Reps. Anna Eshoo (D-CA) and Neal Dunn (R-FL), but it also remains in committee.

Regulation of Robocalls

States have also considered new bills to regulate robocalls, but none has been enacted into law. For example, California proposed a disclosure requirement for the use of an “artificial voice” in automatically dialed calls. Several other states proposed laws specifically targeting the elections context: a Mississippi bill would have required disclosure for any election-related call “generated in whole or substantially by artificial intelligence,” and a North Carolina bill would have required disclosure of any “political advertisement” generated at least partially by artificial intelligence and “sent by email, text, automated calling.” In contrast, West Virginia proposed a bill that would have required prior written consent for calls involving an “artificial voice message,” with an exemption for solicitations made for political purposes.

Guidance for AI Use in Election Administration

Although many election officials would like government guidance on whether and how to use AI in election administration, we have seen no state legislator introduce a bill that would provide it. This stands in contrast to action at the federal level, where Sens. Amy Klobuchar (D-MN) and Susan Collins (R-ME) introduced a bipartisan bill that would require the U.S. Election Assistance Commission to create voluntary guidelines for the use of AI in the administration of federal elections. The bill was unanimously voted out of the Senate Rules Committee.

Restricting the Use of AI in Voter Challenges or Purges

With the introduction of tools like EagleAI, many advocates, including the Brennan Center, have expressed concern that actors seeking to disenfranchise voters will use AI-supported systems to file frivolous voter eligibility challenges or seek to purge large numbers of eligible voters from the voter rolls. Given the unreliability of these tools, the Brennan Center has recommended that states regulate the use of AI to challenge voters and purge voter rolls to ensure that their use does not result in the disenfranchisement of eligible voters. We have not found any bills introduced this year that would do this.

Potential Use in Redistricting

A number of experts have written about how AI could be used to create fairer political maps or, alternatively, to supercharge gerrymandering. While we have not yet seen bills that explicitly promote or ban the use of AI in redistricting, at least two bills introduced last year would have implicated its use had they passed. A bill in Pennsylvania would have commissioned a study on whether to incorporate computational redistricting to make redistricting fairer and limit gerrymandering. In Georgia, legislation was introduced to propose an independent commission for congressional and state legislative redistricting; among its provisions was a requirement for transparency about whether and how algorithms are used in the redistricting process. These proposals are consistent with other recent redistricting reforms that use computational models to generate large numbers of potential districting maps, which can then be compared and scored along various metrics.
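To illustrate the ensemble idea behind those computational reforms, here is a toy sketch that generates many hypothetical plans and scores each one, so that a proposed map can be compared against the resulting distribution. It is a sketch under strong simplifying assumptions: real samplers enforce contiguity, population balance, and other legal criteria, and all vote counts below are invented.

```python
# Toy sketch of the ensemble approach behind computational redistricting:
# generate many hypothetical plans, score each, and compare a proposed map
# against the resulting distribution. Real tools (e.g., Markov chain samplers)
# enforce contiguity and population balance; this toy ignores geography
# entirely, and all vote counts are invented for illustration.
import random

random.seed(7)
N_PRECINCTS, N_DISTRICTS, N_PLANS = 60, 6, 10_000

# Hypothetical precinct-level vote totals for parties A and B.
votes = [(random.randint(200, 600), random.randint(200, 600))
         for _ in range(N_PRECINCTS)]

def random_plan() -> list[int]:
    """Assign each precinct to a district uniformly at random (toy model)."""
    return [random.randrange(N_DISTRICTS) for _ in range(N_PRECINCTS)]

def seats_won_by_a(plan: list[int]) -> int:
    """Score a plan: the number of districts where party A out-polls party B."""
    a_totals = [0] * N_DISTRICTS
    b_totals = [0] * N_DISTRICTS
    for district, (a, b) in zip(plan, votes):
        a_totals[district] += a
        b_totals[district] += b
    return sum(a > b for a, b in zip(a_totals, b_totals))

# Build the ensemble and summarize how many seats party A typically wins;
# a proposed map far out in this distribution's tail suggests gerrymandering.
ensemble = [seats_won_by_a(random_plan()) for _ in range(N_PLANS)]
for seats in range(N_DISTRICTS + 1):
    print(f"Party A wins {seats} seats in {ensemble.count(seats):>5} plans")
```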

Bills Indirectly Related to Elections and Democracy

Aside from bills that specifically safeguard elections from the threats posed by AI, many states are considering legislation that is likely to have an indirect impact on American elections and democracy. Among the bills that would have the greatest impact are those that would allow or require social media users to verify their identity; require watermarking of AI-generated materials; ban deepfakes used for an “unlawful purpose” or depicting real people without their permission; and bolster protections for user privacy.

While none of these bills explicitly mentions elections, and we take no position on their constitutionality or likely effectiveness, all could affect how AI, and in particular AI-fueled disinformation, could be used to disrupt elections.

One California bill seeks to fight anonymous spreaders of online disinformation by requiring social media platforms to seek escalating levels of identifying information from their users with the largest reach. Users who fail to provide identifying information are not banned but are labeled “unauthenticated,” permitting anonymous speech while informing audiences about which influential users will stand by their statements. While the bill is not limited to the bots and trolls that foreign and domestic actors create to spread election and political misinformation, it would clearly affect such actors.

Meanwhile, another California bill would require generative AI companies to embed indiscernible, permanent watermarks in content developed by their AI systems. Modeled on forthcoming rules under the European Union’s AI Act, such watermarks would identify images, video, and audio created with generative AI, along with their creator and creation date. Again, while this requirement would not be limited to AI-created content affecting elections and political communications, it would give platforms and others tools to identify AI-generated content used for those purposes.
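For illustration only, the sketch below hides a short provenance string in an image’s least significant bits, a toy version of an invisible watermark. Real provenance systems of the kind the bill envisions (for example, cryptographically signed manifests) are far more robust; a simple LSB mark like this one is neither permanent nor tamper-resistant, and every name and payload in the code is hypothetical.

```python
# Toy illustration of an invisible image watermark via least-significant-bit
# (LSB) encoding. This is NOT the robust, permanent watermarking the bill
# envisions: LSB marks are erased by recompression or resizing. It only shows
# the basic idea of embedding provenance metadata in pixel data.
from PIL import Image

def embed_watermark(image: Image.Image, payload: str) -> Image.Image:
    """Hide a UTF-8 payload in the red channel's least significant bits."""
    bits = "".join(f"{byte:08b}" for byte in payload.encode("utf-8"))
    out = image.convert("RGB").copy()
    pixels = out.load()
    width, height = out.size
    if len(bits) > width * height:
        raise ValueError("payload too large for this image")
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite lowest red bit
    return out

def extract_watermark(image: Image.Image, payload_len: int) -> str:
    """Read back payload_len bytes from the red channel's low bits."""
    pixels = image.convert("RGB").load()
    width = image.size[0]
    bits = "".join(str(pixels[i % width, i // width][0] & 1)
                   for i in range(payload_len * 8))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

# Example: tag an image with a hypothetical generator name and creation date.
img = Image.new("RGB", (128, 128), "white")
payload = "generator=example-model;created=2024-07-31"
tagged = embed_watermark(img, payload)
print(extract_watermark(tagged, len(payload)))  # recovers the payload
```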

Likewise, the Oklahoma Artificial Intelligence Bill of Rights would address watermarking, stating citizens are entitled to “rely on a watermark or some other form of content credentials to verify the authenticity of creative product they generate or consume.”

New Jersey is considering a bill to outlaw the use of deepfakes in certain circumstances. The bill does not specifically address elections; rather, it would outlaw the creation and dissemination of deceptive media created for the purpose of committing a “crime or offense,” including harassment, cyber harassment, threats, and false incrimination. The category of communications affected could be quite large and would almost certainly cover deepfakes used to harass or defame election administrators and workers.

Other states have gone much further, attempting to ban any deepfake of a real person made without that person’s permission. Tennessee recently enacted the Ensuring Likeness, Voice, and Image Security Act, which asserts that all individuals have a property right to their own likeness and voice. Other states, including Illinois and South Carolina, are considering similar legislation. Although these bills do not specifically mention elections or official proceedings, they would almost certainly cover politicians, election administrators, and election workers.

Apart from legislation on AI-generated content, a number of states have considered or passed legislation that would make it more difficult for companies to collect and sell user information without users’ consent. Minnesota considered, and New Hampshire enacted, legislation that allows users to opt out of profiling and requires data protection assessments. Such legislation could make it harder for bad actors, both foreign and domestic, to execute targeted misinformation and disinformation campaigns, including around elections.

Conclusion

The speed at which states around the country have adopted new laws to address the use of AI in political communications, including political advertisements, is nothing short of extraordinary. No doubt, as states and private individuals seek to enforce these new statutes over the coming months, we will learn something about their effectiveness and whether they can survive legal challenges that will almost certainly follow their implementation.

But as impressive as the passage of new legislation in this space is, and as much as there is to learn about how these new laws may impact the use of AI in political communications, legislatures and regulators around the country have more work to do. In particular, they must start grappling with other challenges that AI poses to our democracy and what guardrails can be put in place to address them.

AI is already being used to suppress and intimidate voters and to falsely allege election fraud. Election officials report being approached by vendors to use generative AI in their work, but they generally do not have any state or federal guidance for how to safely do so. Several experts have warned that AI could be used to further supercharge partisan gerrymandering of legislative districts. These and other challenges need thorough examination and potential state legislation. So far, this discussion has not happened in most state legislatures around the country.