One year ago this week, 27 artificial intelligence companies and social media platforms signed an accord highlighting how AI-generated disinformation could undermine elections around the world. Signed at the Munich Security Conference, the accord counted Google, Meta, Microsoft, OpenAI, and TikTok among its backers. The signers acknowledged the dangers, stating, “The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes.”
The signatories agreed to eight commitments to mitigate the risks that generative AI poses to elections. Companies pledged to:
- Develop technology to prevent creation of deceptive AI election content: Build or deploy tools like watermarking, metadata tagging, and AI-generated content classifiers to verify authenticity (a minimal sketch of the metadata-tagging idea follows this list).
- Assess AI models for election-related risks: Evaluate vulnerabilities in their AI models to prevent misuse in election disinformation.
- Detect deceptive AI election content on platforms: Enhance automated detection, content moderation, and interoperable identifiers to differentiate AI-generated content from authentic media.
- Respond effectively to deceptive AI election content: Label AI-generated content, provide context, and enforce policies to limit the spread of election-related disinformation.
- Collaborate across the industry to counter AI-driven election risks: Share best practices, detection tools, and technical signals to strengthen collective defenses.
- Increase transparency in AI election policies: Publish policies, share research updates, and inform the public about their actions to combat deceptive AI content.
- Engage with civil society and experts: Work with civil society organizations, academics, and subject matter experts to refine their strategies and stay ahead of emerging threats.
- Educate the public on AI-generated election content: Support media literacy initiatives, provide content verification tools, and develop open-source resources to help voters recognize AI-driven disinformation.
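To make the first commitment more concrete, below is a minimal sketch, in Python with the Pillow library, of how metadata tagging can work in principle: the generator writes a provenance field into the file at creation time, and a platform reads it back at upload time. Everything here is illustrative; the `ai_provenance` key, the `tag_png` and `read_provenance` helpers, and the `example-model-v1` name are hypothetical, and none of this reflects any signatory’s actual pipeline. Production systems rely on cryptographically signed provenance standards such as C2PA Content Credentials rather than a bare text chunk.

```python
# Minimal sketch of the "metadata tagging" commitment: a generator embeds a
# provenance field when it creates an image, and a platform checks for it.
# The "ai_provenance" key is hypothetical, not part of C2PA or any real scheme.
from PIL import Image, PngImagePlugin


def tag_png(src: str, dst: str, generator: str) -> None:
    """Write a copy of a PNG with a provenance text chunk embedded."""
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_provenance", f"generated-by:{generator}")
    with Image.open(src) as im:
        im.save(dst, pnginfo=info)


def read_provenance(path: str) -> str | None:
    """Return the provenance tag if the PNG carries one, else None."""
    with Image.open(path) as im:
        # PNG text chunks are exposed via the .text mapping on PNG images.
        return getattr(im, "text", {}).get("ai_provenance")


if __name__ == "__main__":
    # Stand-in for an AI-generated image.
    Image.new("RGB", (64, 64), "white").save("demo.png")
    tag_png("demo.png", "demo_tagged.png", "example-model-v1")
    print(read_provenance("demo_tagged.png"))  # -> "generated-by:example-model-v1"
    print(read_provenance("demo.png"))         # -> None
```

The sketch also illustrates the technique’s central weakness: a plain metadata tag disappears after a simple re-save or screenshot, which is why the accord pairs tagging with robust watermarking and AI-content classifiers rather than relying on any one signal.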
This analysis assesses how the companies followed through on their commitments, based on their own reporting. At the time the accord was signed, the companies involved received positive attention for promising to act to ensure that their products would not interfere with elections. While the Brennan Center, too, praised these companies for the accord, we also asked how the public should gauge whether the commitments were anything more than PR window-dressing.
Companies had multiple opportunities to report on their progress over the past year, including through updates on the accord’s official website, responses to a formal inquiry from then-Senate Intelligence Committee Chair Mark Warner (D-VA), and direct requests from the Brennan Center.
Several signatories to the accord took advantage of these opportunities, providing evidence that they made progress on their commitments. To the extent these responses have been substantive, they provide important voluntary accountability to the public at a time when many governments have failed to require it. Hopefully, these reported actions will create momentum within the tech industry to build stronger safeguards against the threats that unregulated AI poses to elections around the world. But voluntary momentum alone is not nearly enough.
While the accord was a positive step, our analysis shows that many signatories did not report their progress despite having multiple forums to do so, and despite “transparency to the public” being one of the accord’s pledges. Some reported nothing at all; others left out key commitments when assessing their own performance or offered little detail to back up their assertions.
This highlights a fundamental flaw in the AI Elections Accord: the absence of any mandatory progress-reporting requirement allowed many companies to claim the public goodwill that came with signing the accord without having to account for their follow-through.
Even where companies did report substantial progress in meeting their commitments, it is often impossible for independent observers like the Brennan Center to verify that progress. This is the result of a critical flaw in the accord itself: civil society groups that could have been partners in the accord were not included as participants, and the accord provided no opportunity for these groups or other independent entities to audit the signers’ subsequent compliance claims. Furthermore, because the accord set no agreed-upon metrics for detailing progress, the information companies eventually provided varied widely and often omitted the details that would allow an outside observer to judge the substance of their efforts.
Some might question why this matters, given that there is no direct evidence that AI-generated disinformation changed the outcome of any major election. But focusing only on what is immediately measurable ignores the deeper threat that AI poses to democracy. The 2024 elections showed how AI is already eroding the information ecosystem, from deepfake robocalls impersonating candidates to chatbot-driven misinformation campaigns.
While worst-case scenarios did not materialize, the AI threat is far from overstated. It is very likely that the election interference tactics we witnessed in 2024 are just the beginning. As AI advances, disinformation will become harder to track and counteract. The longer we wait, the more entrenched these tactics will become. Companies must act now before the damage is beyond repair.