Policy Solution

An Agenda to Strengthen U.S. Democracy in the Age of AI

Federal and state policymakers must rise to the challenge artificial intelligence poses to safe, secure elections and responsive, accountable governance.

[Illustration of circuitry overlaid on a map of the United States. Credit: Rob Dobi.]

The year 2024 began with bold predictions about how the United States would see its first artificial intelligence (AI) election. [1] Commentators worried that generative AI — a branch of AI that can create new images, audio, video, and text — could produce deepfakes that would so inundate social media that users would be unable to separate truth from fiction when making voting decisions. [2] Meanwhile, some self-labeled techno-optimists proclaimed that AI would revolutionize voter outreach and fundraising, thereby leveling the playing field for campaigns that could not otherwise afford expensive political consultants and staff. [3]

As the election played out, AI was employed in numerous ways: Foreign adversaries used the technology to augment their election interference by creating copycat news sites filled with what appeared to be AI-generated fake stories. [4] Campaigns leveraged deepfake technology to convincingly imitate politicians and produce misleading advertisements. [5] Activists deployed AI systems to support voter suppression efforts. [6] Candidates and supporters used AI tools to build political bot networks, translate materials, design eye-catching memes, and assist in voter outreach. [7] And election officials experimented with AI to draft social media content and provide voters with important information like polling locations and hours of operation. [8] Of course, AI likely was also used during this election in ways that have not yet come into focus and may only be revealed months or even years from now.

Were the fears and promises overhyped? Yes and no. It would be a stretch to claim that AI transformed last year's U.S. elections in either direction, and the worst-case scenarios did not come to pass. [9] But AI did play a role that few could have imagined a mere two years ago, and a review of that role offers some important clues as to how, as the technology becomes even more sophisticated and widely adopted, AI could alter U.S. elections — and American democracy more broadly — in the coming years.

AI promises to transform how government interacts with and represents its citizens, and how it understands and interprets the will of its people. [10] The revelations emerging about AI's applications in 2024 offer lessons about the guardrails and incentives that must be put in place now — lest even more advanced iterations of the technology be allowed to wreak irreversible havoc on U.S. elections and democratic governance as a whole. This report lays out the Brennan Center's vision for how policymakers can ensure that AI's inevitable changes strengthen rather than weaken the open, responsive, accountable, and representative democracy that all Americans deserve.

Now is the time for policymakers at all levels to think deliberately and expansively about how to minimize AI's dangers and increase its pro-democracy potential. That means more than just passing new laws and regulations that relate directly to election operations. It also means holding AI developers and tech companies accountable for their products' capacity to influence how people perceive facts, and investing in the resources (including workforces and tools) and audit regimes that will make it more difficult for antagonists to use AI to mislead and disenfranchise voters. Policymakers should also establish guardrails that allow election officials and other public servants to use AI in ways that improve efficiency, responsiveness, and accountability without inadvertently falling prey to the technology's pitfalls.

Whether and to what extent Congress and Donald Trump’s administration will prioritize regulating AI remains to be seen. This report provides the following recommendations for both federal and state policymakers, but it is clear that states have a major role to play in 2025 and beyond in strengthening America’s democracy in the AI age.

Government Capacity

Governments at all levels — local, state, and federal — must strengthen their capacity to confront the impacts of AI. State and local governments should establish advisory councils to build a baseline understanding of AI's risks and of its opportunities to better serve the public. Multiple states have created such entities to help state and local governments determine whether and how to integrate AI into their operations, though many more have not yet taken such steps.

Federal, state, and local lawmakers should also train staff to use AI appropriately and secure sufficient funding to support safe and responsible AI use. Adequate resources are required to hire and retain top technical and nontechnical AI talent — including computer scientists, cybersecurity professionals, AI risk management experts, and privacy and legal officers — that might otherwise be drawn to more lucrative opportunities in the private sector. These personnel are essential to ensuring that government departments, offices, and agencies can deploy AI with appropriate safeguards.

Transparency Requirements

Congress and state legislatures should require greater transparency for AI-created and curated election content. Lawmakers should require AI developers, social media platforms, and search engines to publish information on AI-generated election content and AI-assisted design features. Requirements should include information concerning the volume of political deepfakes present on or produced by platforms and tools, implementation of watermarks and content provenance standards, and policies pertaining to responsible dissemination of AI-generated election content.
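As a concrete illustration, the sketch below shows one shape such a disclosure could take: a machine-readable transparency report covering deepfake volume, watermarking, and dissemination policies. Every field name and value here is a hypothetical assumption, not a format prescribed by any existing law or standard.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shape of a platform's machine-readable transparency report;
# all fields below are illustrative assumptions, not a required format.
@dataclass
class ElectionAITransparencyReport:
    reporting_period: str                # e.g., a calendar quarter
    political_deepfakes_detected: int    # volume found on or produced by the platform
    political_deepfakes_labeled: int     # how many carried a visible disclosure
    watermarking_standard: str           # content provenance scheme in use, if any
    provenance_coverage_pct: float       # share of AI-generated media with provenance data
    dissemination_policy_url: str        # public policy on AI-generated election content

report = ElectionAITransparencyReport(
    reporting_period="2026-Q3",
    political_deepfakes_detected=1842,
    political_deepfakes_labeled=1790,
    watermarking_standard="C2PA Content Credentials",
    provenance_coverage_pct=97.2,
    dissemination_policy_url="https://platform.example/ai-election-policy",
)
print(json.dumps(asdict(report), indent=2))
```

A standardized, machine-readable format of this kind would let regulators and researchers compare disclosures across platforms rather than parse each company's bespoke report.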

Congress and the states should also require transparency around generative AI tools’ training data. To counter the risk that AI-generated content will be used, among other things, to mislead voters with highly personalized and false information about elections, lawmakers should require generative AI developers to publicly disclose the sources of their original training data sets and of any training data sets under their control used to customize AI systems for particular uses.

Data Safeguards and Corporate Accountability

Congress and state legislatures should ensure that AI developers and system providers can be held liable for harms that their products cause. Congress should explore clarifying that Section 230 liability immunities — which generally shield online platforms from being held liable for content posted by their users under the 1996 Communications Decency Act — do not apply to generative AI developers and deployers. [11] Federal and state lawmakers should also pass laws that make it easier to sue AI developers by requiring them to exercise reasonable care to prevent certain foreseeable harms to voters and the election process.

Additionally, Congress and the states should pass new data privacy protections. Legislatures should regulate generative AI models' collection, use, and processing of personal data to mitigate, among other concerns, malefactors' ability to use AI tools armed with such data to manipulate or intimidate voters or election officials. At a minimum, such protections should limit the collection of personal data to authorized purposes and require users to opt in both to the collection of personal data through their interactions with AI models and to any sale or release of that data to third parties. These measures would grant users greater autonomy over their personal data and empower them to make informed decisions about how their information is collected and used.

Civic Participation Protections

Congress and the states should authorize government agencies to disregard misattributed comments on proposed regulations. The federal Administrative Procedure Act and its state analogues typically require agencies to solicit, consider, and respond to public comments on proposed regulations. [12] Lawmakers should update these statutes to allow agencies to disregard comments that falsely impersonate others, are transmitted via bots, or are otherwise incorrectly attributed — all of which will become easier with the assistance of generative AI — and to safeguard consideration of authentic submissions.
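To make this concrete, here is a minimal sketch of one screening heuristic an agency could use to flag likely mass-generated submissions for closer human review. The threshold, field names, and normalization rules are illustrative assumptions, not a description of any agency's actual process.

```python
import hashlib
import re
from collections import defaultdict

def fingerprint(comment_text: str) -> str:
    """Collapse case, punctuation, and whitespace so trivially varied
    copies of the same template share a fingerprint."""
    normalized = re.sub(r"[^a-z0-9 ]", "", comment_text.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

def flag_suspected_mass_submissions(comments: list[dict], threshold: int = 100) -> list[dict]:
    """Flag comments whose normalized text appears at least `threshold`
    times; flagged comments go to human review, not automatic discard."""
    groups = defaultdict(list)
    for comment in comments:
        groups[fingerprint(comment["text"])].append(comment)
    return [c for group in groups.values() if len(group) >= threshold for c in group]

# Usage (hypothetical data): flagged = flag_suspected_mass_submissions(all_comments)
```

Routing flagged comments to reviewers rather than discarding them automatically is consistent with the goal above: filtering inauthentic traffic while safeguarding authentic submissions.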

Federal and state governing bodies should also expand opportunities for real constituents to offer policy input, including by providing ample avenues for public comment that are less vulnerable to technological manipulation, such as surveys built into the process of public benefits administration and those that rely on address-based recruitment, as well as in-person events and town halls.

In addition, federal lawmakers and state agencies should establish guardrails for responsibly using AI to solicit and respond to constituent feedback and questions. Although AI holds significant potential to make government more responsive, its use also carries risks. Federal lawmakers should clarify that using AI to analyze comments on proposed federal regulations — and for other highly consequential processes like soliciting input on essential government services — directly affects people's civil rights and therefore warrants compulsory protections, including minimum thresholds for accuracy, safeguards against harmful bias, and transparency guarantees. State agencies should impose similar requirements.

Political Communications Regulations

Congress and the states should require disclosure of deepfakes and other manipulated media in political communications. These requirements should apply to candidates, parties, and other political groups that create and disseminate visual and audio content in ads or similar communications. They should cover content that is artificially generated, or that is created or substantially modified with digital tools, such that it would leave a reasonable viewer or listener with a significantly different understanding of the speech or events depicted than what actually occurred. Laws should also mandate clear, easy-to-understand disclaimers informing viewers and listeners that such content has been manipulated.

Major online platforms should also be required to include such information in any public files on political ads sales that they maintain and to use state-of-the-art tools to detect and label a subset of other political content generated or substantially modified by synthetic means. Alongside mandating these disclosures, Congress and state legislatures should consider targeted prohibitions for especially harmful and deceptive election-related content.

Moreover, Congress and state legislatures should require labeling for a subset of content produced by large language models, or LLMs (such as the GPT models behind ChatGPT, Google's PaLM, and Meta's Llama), including when LLMs power interactive chatbots and social media bots deployed by candidates, parties, or other political groups. Such chatbots and social media bots should carry labels informing viewers or listeners of their artificial nature. [13]

Voter Suppression Prohibitions

Congress and the states should strengthen deceptive practices laws and bills to more thoroughly cover deceitful and intimidating AI-generated content. Lawmakers should amend laws and bills that curb the knowing and intentional dissemination of falsehoods about where, when, and how to vote so that they expressly cover AI systems and better limit risks from AI developers who might deliberately design AI tools to disenfranchise voters.

Federal and state legislators should also pass laws prohibiting the knowing and intentional dissemination, in the 60 days before an election, of deepfakes with strong potential to suppress votes. Examples include synthetic content falsely depicting inaccessible polling places, impediments to the use of voting equipment, or election officials preventing or hindering voting.

Additionally, Congress and the Federal Communications Commission (FCC) should bolster the regulation of political robocalls that use generative AI. For one, Congress should close the loophole in robocall regulations that allows political robocalls to be made to landlines under certain conditions without prior consent. [14] The FCC should also complete the process it began in August 2024 to augment prior express consent rules around generative AI–powered robocalls and clarify that the strengthened requirements would apply to political robocalls made to mobile devices.

Election Security Defenses

Congress and the states should boost funding to increase defenses against cyber threats amplified by AI systems. As new AI developments elevate the risk of cyberattacks on election infrastructure, election officials need additional resources and support to implement safeguards, mitigation measures, and security best practices, including the creation of statewide cyber navigator programs to assist local jurisdictions with cybersecurity needs, replacement of outdated equipment, investment in resiliency, and training and education around AI-enhanced security threats.

Relevant federal and state agencies should invest in tools that will allow election offices to embed digital authentication markers in official content. State and federal agencies, starting with the Cybersecurity and Infrastructure Security Agency (CISA) at the federal level, should work with election offices to test and use these tools moving forward. Private-sector partners could be invaluable in developing and deploying such tools.
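A minimal sketch of how such authentication markers could work, using the widely available Python `cryptography` package: the election office signs official content with a private key, and voters or platforms verify it against the office's published public key. Key generation, storage, and distribution are heavily simplified assumptions here, and the county name and announcement text are hypothetical.

```python
# Requires the `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice the private key would be generated and stored in a hardware
# security module; creating it inline is only for illustration.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # would be published for voters to use

announcement = b"Polling places in Example County are open 7 a.m. to 8 p.m."
signature = private_key.sign(announcement)

# A voter-facing verification tool would perform this check.
try:
    public_key.verify(signature, announcement)
    print("Signature valid: content came from the election office, unaltered.")
except InvalidSignature:
    print("Signature invalid: content may be forged or altered.")
```

The hard problems are institutional rather than cryptographic — distributing keys voters can trust and keeping them secure — which is why agency guidance and private-sector partnership matter as much as the tooling itself.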

Congress and the states should fund election offices to educate voters about AI. State and local election officials, along with other government offices, will need to launch voter education campaigns to prepare the public for AI-related changes and challenges in the coming years.

Furthermore, Congress should require independent federal oversight of election vendor security practices. Just as election offices are likely to be targets of AI-enhanced security threats, election system vendors are also logical targets. The federal government should mandate election security best practices for vendors in the elections space — including robust system and network protections and resilience planning — as it does for vendors in other sectors whose assets, systems, and networks have been designated as critical infrastructure.

Election Administration Standards

Relevant federal and state agencies should develop guidance and baseline standards for election officials on how and when to use AI, and they should oversee the creation of an incident reporting system so election officials and others can report AI-related harms. These steps would enable election officials who choose to integrate AI into their work to do so as safely and responsibly as possible. Congress and state legislatures should allocate funds for state and local election offices to implement those guidelines and standards, as well as for the monitoring, auditing, and red-teaming (which involves controlled attempts to breach an organization’s system to uncover security vulnerabilities) that will be necessary going forward.

Federal and state lawmakers should also require audits, especially for vulnerable and high-risk election systems that utilize AI. Such systems include those used to identify potentially ineligible voters or others who might be removed from voter rolls; those used to verify voters’ identities; and those used to provide election information to voters on how to vote, where polling places are located and hours of operation, and what forms of ID might be required.

Finally, Congress and the states should regulate the most sensitive rights-affecting AI use cases in election administration. In many jurisdictions, election officials use AI-powered tools to assist in maintaining voter registration databases and to verify mail ballot signatures, both rights-affecting use cases that necessitate specific additional safeguards such as algorithmic bias testing and error rate monitoring to catch inaccuracies and help mitigate biases. These guardrails must include human involvement in reviewing AI-assisted decisions — particularly for cases flagged as high-risk — as well as regular audits and evaluations of AI systems to ensure effectiveness and compliance with baseline standards.
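The sketch below illustrates, in simplified form, what error rate monitoring and human review routing could look like for an AI-assisted signature verification system. The thresholds, group labels, and record fields are illustrative assumptions, not recommended values or a description of any deployed system.

```python
from collections import defaultdict

CONFIDENCE_FLOOR = 0.90  # decisions below this go to a human reviewer
DISPARITY_RATIO = 1.5    # a group rejection rate this far above the mean triggers an audit

def route(decision: dict) -> str:
    """Route low-confidence AI signature-match decisions to human review."""
    return "human_review" if decision["confidence"] < CONFIDENCE_FLOOR else "auto"

def rejection_rates(decisions: list[dict]) -> dict[str, float]:
    """Compute the share of rejected signatures per group (e.g., precinct)."""
    totals, rejections = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        rejections[d["group"]] += d["rejected"]
    return {group: rejections[group] / totals[group] for group in totals}

def audit_flags(rates: dict[str, float]) -> list[str]:
    """List groups whose rejection rate is disproportionately high."""
    mean_rate = sum(rates.values()) / len(rates)
    return [g for g, rate in rates.items() if mean_rate and rate > DISPARITY_RATIO * mean_rate]

# Hypothetical decision records for illustration.
decisions = [
    {"group": "precinct_1", "rejected": False, "confidence": 0.98},
    {"group": "precinct_1", "rejected": False, "confidence": 0.97},
    {"group": "precinct_2", "rejected": True,  "confidence": 0.85},
    {"group": "precinct_2", "rejected": True,  "confidence": 0.99},
]
rates = rejection_rates(decisions)
print(rates, audit_flags(rates), [route(d) for d in decisions])
```

Even this toy example shows the two guardrails named above working together: statistical monitoring surfaces groups with outlying rejection rates for audit, while low-confidence individual decisions are never finalized without human involvement.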

■ ■ ■

Several campaigns, foreign adversaries, and even some election officials experimented in significant ways with AI in 2024, but it does not currently appear that AI itself radically transformed their operations. In retrospect, that may not be particularly surprising, given how new the technology is. But the piloting seen in 2024 will almost certainly become more integrated into both attacks against and defenses of our elections and democracy in the next few years: Despite the enormous investment in AI globally since 2022, widespread adoption of AI tools across U.S. companies is not projected until the second half of the decade. [15] The same is surely true of AI's use in elections and democratic processes.

End Notes

1. Galen Druke, "2024 Is the 1st 'AI Election.' What Does That Mean?," ABC News, December 1, 2023, https://abcnews.go.com/538/2024-1st-ai-election/story?id=105312571. AI refers to an expansive category of computer systems that leverage data, algorithms, and computational power to process information in ways that once only human intelligence could. Traditional AI tools can accomplish tasks like recognizing speech, identifying patterns in data, and making predictions. AI is now used in everyday applications, from TV, film, and digital video recommendations to facial recognition for airport security to driving cars. The Organisation for Economic Co-operation and Development (OECD) has defined an AI system as a "machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment." This report adopts that definition. See Organisation for Economic Co-operation and Development, "Explanatory Memorandum on the Updated OECD Definition of an AI System," OECD Artificial Intelligence Development Papers, no. 8, March 2024, https://doi.org/10.1787/623da898-en.

2. Emma Folts, "Voters: Here's How to Spot AI 'Deepfakes' That Spread Election-Related Misinformation," Heinz College, Carnegie Mellon University, October 18, 2024, https://www.heinz.cmu.edu/media/2024/October/voters-heres-how-to-spot-ai-deepfakes-that-spread-election-related-misinformation1; and Mike Wereschagin, "AI-Powered Deepfakes Threaten 'Chaos in the System' in Historic Election Year," Pittsburgh Post-Gazette, February 4, 2024, https://www.post-gazette.com/news/election-2024/2024/02/04/ai-deepfakes-fcc-chaos-presidential-election/stories/202402040088.

3. Zelly Martin et al., Political Machines: Understanding the Role of Generative AI in the U.S. 2024 Elections and Beyond, Center for Media Engagement, University of Texas at Austin, June 6, 2024, https://mediaengagement.org/research/generative-ai-elections-and-beyond; Marc Andreessen, "The Techno-Optimist Manifesto," Andreessen Horowitz, October 16, 2023, https://a16z.com/the-techno-optimist-manifesto; and Dave Karpf, "Parsing the Political Project of Techno-Optimism," Tech Policy Press, December 19, 2023, https://www.techpolicy.press/parsing-the-political-project-of-techno-optimism.

4. Juliana Kim, "Microsoft Detects Fake News Sites Linked to Iran Aimed at Meddling in U.S. Election," NPR, August 9, 2024, https://www.npr.org/2024/08/09/nx-s1-5069317/iran-interfere-presidential-election-microsoft-report; and Darren Linvill and Patrick Warren, "New Russian Disinformation Campaigns Prove the Past Is Prequel," Lawfare, January 22, 2024, https://www.lawfaremedia.org/article/new-russian-disinformation-campaigns-prove-the-past-is-prequel.

5. Shanze Hasan, "The Effect of AI on Elections Around the World and What to Do About It," Brennan Center for Justice, June 6, 2024, https://www.brennancenter.org/our-work/analysis-opinion/effect-ai-elections-around-world-and-what-do-about-it.

6. Jane C. Timm, "Inside the Right's Effort to Build a Voter Fraud Hunting Tool," NBC News, August 17, 2023, https://www.nbcnews.com/politics/2024-election/conservatives-voter-fraud-hunting-tool-eagleai-cleta-mitchell-rcna97327.

7. Maria Papageorgiou, "Social Media, Disinformation, and AI: Transforming the Landscape of the 2024 U.S. Presidential Political Campaigns," The SAIS Review of International Affairs, January 14, 2025, https://saisreview.sais.jhu.edu/social-media-disinformation-and-ai-transforming-the-landscape-of-the-2024-u-s-presidential-political-campaigns/.

8. Dean Jackson, Matthew Weil, and William T. Adler, Preparing for Artificial Intelligence and Other Challenges to Election Administration: Results from Tabletop Exercises in Five States During the 2024 Election, Bipartisan Policy Center, October 2024, https://bipartisanpolicy.org/report/preparing-for-artificial-intelligence-and-other-challenges-to-election-administration.

9. Nick Bilton, "Dizzying Deepfakes and Personalized Propaganda: Welcome to the AI Election," Vanity Fair, September 6, 2024, https://www.vanityfair.com/news/story/welcome-to-the-ai-election; Charlotte Hu, "How AI Bots Could Sabotage 2024 Elections Around the World," Scientific American, February 13, 2024, https://www.scientificamerican.com/article/how-ai-bots-could-sabotage-2024-elections-around-the-world; Nathan E. Sanders and Bruce Schneier, "AI Could Still Wreck the Presidential Election," The Atlantic, September 24, 2024, https://www.theatlantic.com/technology/archive/2024/09/ai-election-ads-regulation/680010; Thalia Khan, "What Role Did AI Play in the 2024 U.S. Election?," Partnership on AI, November 4, 2024, https://partnershiponai.org/what-role-did-ai-play-in-the-2024-u-s-election; and "The First 'AI Elections' Weren't as Disastrous as Predicted. Here's Why," Fast Company, December 4, 2024, https://www.fastcompany.com/91239055/ai-2024-elections-politics.

10. Bruce Schneier, "How AI Will Change Democracy," Schneier on Security (blog), May 31, 2024, https://www.schneier.com/blog/archives/2024/05/how-ai-will-change-democracy.html.

11. Communications Decency Act of 1996, 47 U.S.C. § 230.

12. Administrative Procedure Act of 1946, 5 U.S.C. §§ 551–559.

13. Mekela Panditharatne, "Preparing to Fight AI-Backed Voter Suppression," Brennan Center for Justice, April 16, 2024, https://www.brennancenter.org/our-work/research-reports/preparing-fight-ai-backed-voter-suppression.

14. Graham Wilson and Maxwell Schechter, "The FCC Did Not Ban All AI Robocalls: Political Organizations Can Implement an AI Robocall Program with Certain Restrictions," Elias Law Group Newsroom, February 23, 2024, https://www.elias.law/newsroom/client-alerts/the-fcc-did-not-ban-all-ai-robocalls.

15. "AI Is Showing 'Very Positive' Signs of Eventually Boosting GDP and Productivity," Goldman Sachs, May 13, 2024, https://www.goldmansachs.com/insights/articles/AI-is-showing-very-positive-signs-of-boosting-gdp.