Analysis

Gauging the AI Threat to Free and Fair Elections

Artificial intelligence didn’t disrupt the 2024 election, but the effects are likely to be greater in the future. 


The run-up to the 2024 election was marked by predictions that artificial intelligence could trigger dramatic disruptions. The worst-case scenarios — such as AI-assisted large-scale disinformation campaigns and attacks on election infrastructure — did not come to pass. However, the rise of AI-generated deepfake videos, images, and audio misrepresenting political candidates and events is already influencing the information ecosystem.

Over time, the misuse of these tools is eroding public trust in elections by making it harder to distinguish fact from fiction, intensifying polarization, and undermining confidence in democratic institutions. Understanding and addressing the threats that AI poses requires us to consider both its immediate effects on U.S. elections and its broader, long-term implications.

Incidents such as the New Hampshire robocalls in which an AI-generated impersonation of President Biden urged primary voters to stay home captured widespread attention, as did election misinformation spread by chatbots such as Grok on the social media platform X. Russian operatives created AI-generated deepfakes of Vice President Kamala Harris, including a widely circulated video that falsely portrayed her as making inflammatory remarks and that tech billionaire Elon Musk shared on X. Separately, a former Palm Beach County deputy sheriff now operating from Russia helped produce and disseminate fabricated videos, including one falsely accusing vice-presidential nominee Minnesota Gov. Tim Walz of assault.

Similar stories emerged around elections worldwide. In India’s 2024 general elections, AI-generated deepfakes that showed celebrities criticizing Prime Minister Narendra Modi and endorsing opposition parties went viral on platforms such as WhatsApp and YouTube. During Brazil’s 2022 presidential election, deepfakes and bots were used to spread false political narratives on platforms including WhatsApp. While no direct, quantifiable impact on election outcomes has been identified, these incidents highlight the growing role of AI in shaping political discourse. The spread of deepfakes and automated disinformation can erode trust, reinforce political divisions, and influence voter perceptions. These dynamics, while difficult to measure, could have significant implications for democracy as AI-generated content becomes more sophisticated and pervasive.

The long-term consequences of AI-driven disinformation go beyond eroding trust — they create a landscape where truth itself becomes contested. As deepfakes and manipulated content grow more sophisticated, bad actors can exploit the confusion, dismissing real evidence as fake and muddying public discourse. This phenomenon, sometimes called the liar’s dividend, enables anyone — politicians, corporations, or other influential figures — to evade accountability by casting doubt on authentic evidence. Over time, this uncertainty weakens democratic institutions, fuels disengagement, and makes societies more vulnerable to manipulation, both from domestic actors and foreign adversaries.

The growing risks highlight the urgent need for greater transparency and accountability. Social media platforms and AI developers must implement measures to disclose the origins of AI-generated content. Watermarking and other tools that establish content provenance could help voters distinguish authentic information from manipulated media. Additionally, platforms should reinvest in trust and safety teams, many of which have been significantly downsized, leaving gaps in oversight that bad actors are eager to exploit.
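To make the provenance idea concrete, here is a minimal sketch in Python, using only the standard library. It assumes a simplified scheme in which a publisher signs a hash of the media it releases, and anyone holding the signed record can check whether a file has since been altered. Real provenance standards such as C2PA instead embed certificate-signed manifests in the media file itself, so the key, function names, and workflow below are illustrative assumptions, not any platform’s actual API.

```python
import hashlib
import hmac

# Illustrative sketch only: real provenance standards (e.g., C2PA) use
# certificate-based signatures and manifests embedded in the media file.
# Here a shared-key HMAC stands in for the publisher's signature.

PUBLISHER_KEY = b"example-shared-secret"  # hypothetical demo key

def sign_media(data: bytes) -> str:
    """Publisher side: bind the media bytes to the publisher's key."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_provenance(data: bytes, tag: str) -> bool:
    """Verifier side: does the file still match the published record?"""
    return hmac.compare_digest(sign_media(data), tag)

if __name__ == "__main__":
    original = b"...video bytes as released by the campaign..."
    record = sign_media(original)

    tampered = original + b"[inserted deepfake frames]"
    print(verify_provenance(original, record))   # True: untouched
    print(verify_provenance(tampered, record))   # False: altered after signing
```

The design point is that verification flags any change to the bytes after signing, which would let a platform label content whose provenance record no longer matches rather than trying to detect manipulation from the pixels alone.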

Yet the challenges extend beyond public-facing platforms. Encrypted platforms such as WhatsApp and Telegram — which a growing number of people now turn to for news — add another layer of complexity, as their design limits oversight. The rapid spread of AI-generated disinformation through these channels echoes the lessons of past elections, such as the 2016 presidential race, in which a lack of oversight and transparency meant that the full extent of foreign interference became apparent only years later.

At the heart of this issue lies a deeper question: How do we preserve the integrity of democratic systems in an era of rapid technological change? Safeguarding elections will require a multipronged approach, including legislative action to mandate transparency, voter education campaigns, and collaboration between technology companies, policymakers, and civil society organizations. It is not enough to react to threats as they arise — we must proactively address the systemic vulnerabilities that allow AI-driven interference to flourish.

One potential solution is creating ethical guidelines for AI developers, modeled after standards in industries such as health care and finance. For instance, the health care sector has long relied on protocols to prioritize patient safety, while the financial industry enforces regulations to prevent fraud and manage systemic risks. Though not without flaws, these frameworks provide a foundation for accountability and harm mitigation. Similarly, ethical guidelines for AI could include requirements to mitigate risks in politically sensitive contexts, such as clear labeling of AI-generated political ads and deepfakes to enhance transparency and trust. Platforms that host or disseminate deepfakes should also be held to these standards through regulation.

From deepfakes targeting high-ranking officials to misinformation campaigns designed to manipulate voters, AI-driven disinformation has exposed critical vulnerabilities in our democratic system. Addressing these threats requires more than reactive measures; it demands a coordinated, urgent response. Social media platforms, AI developers, and policymakers must act now to implement transparency requirements, strengthen trust and safety protections, and establish accountability mechanisms for AI-generated content. Without decisive action, AI-fueled deception could become an enduring feature of political campaigns, eroding the very foundation of democratic governance. The integrity of elections depends on recognizing this challenge and confronting it before it becomes an irreversible norm.