Expert Brief

Tech Companies Pledged to Protect Elections from AI — Here’s How They Did

The industry’s voluntary commitments are no substitute for enforceable regulations and external oversight.


One year ago this week, 27 artificial intelligence companies and social media platforms signed an accord that highlighted how AI-generated disinformation could undermine elections around the world. The signers, who announced the accord at the Munich Security Conference, included Google, Meta, Microsoft, OpenAI, and TikTok. They acknowledged the dangers, stating, “The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes.”

The signatories agreed to eight commitments to mitigate the risks that generative AI poses to elections. Companies pledged to:

  1. Develop technology to prevent creation of deceptive AI election content: Build or deploy tools like watermarking, metadata tagging, and AI-generated content classifiers to verify authenticity (a toy sketch of metadata tagging follows this list).
  2. Assess AI models for election-related risks: Evaluate vulnerabilities in their AI models to prevent misuse in election disinformation.
  3. Detect deceptive AI election content on platforms: Enhance automated detection, content moderation, and interoperable identifiers to differentiate AI-generated content from authentic media.
  4. Respond effectively to deceptive AI election content: Label AI-generated content, provide context, and enforce policies to limit the spread of election-related disinformation.
  5. Collaborate across the industry to counter AI-driven election risks: Share best practices, detection tools, and technical signals to strengthen collective defenses.
  6. Increase transparency in AI election policies: Publish policies, share research updates, and inform the public about their actions to combat deceptive AI content.
  7. Engage with civil society and experts: Work with civil society organizations, academics, and subject matter experts to refine their strategies and stay ahead of emerging threats.
  8. Educate the public on AI-generated election content: Support media literacy initiatives, provide content verification tools, and develop open-source resources to help voters recognize AI-driven disinformation.
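To make the first commitment concrete: “metadata tagging” generally means attaching a provenance record to a generated asset so that a platform can later verify what produced it. The following is a toy sketch of the idea, not any signatory’s actual pipeline; the file name, field names, and generator label are hypothetical, and production provenance systems (such as C2PA-style content credentials) cryptographically sign and embed these records rather than writing a sidecar file.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def tag_generated_media(media_path: str, generator: str) -> Path:
    """Write a JSON sidecar recording provenance for an AI-generated file.

    Toy illustration of metadata tagging: real provenance systems
    cryptographically sign these assertions and embed them in the
    asset itself rather than alongside it.
    """
    media = Path(media_path)
    digest = hashlib.sha256(media.read_bytes()).hexdigest()
    credential = {
        "asset": media.name,
        "sha256": digest,              # binds the record to these exact bytes
        "generator": generator,        # which model or tool produced the asset
        "ai_generated": True,          # the disclosure the accord calls for
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = media.with_suffix(media.suffix + ".provenance.json")
    sidecar.write_text(json.dumps(credential, indent=2))
    return sidecar

# Hypothetical usage:
# tag_generated_media("campaign_image.png", generator="example-image-model-v1")
```

A platform that later encounters the asset can recompute the hash and compare it against the credential; a mismatch flags the provenance record as no longer describing the file.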

This analysis assesses how the companies followed through on their commitments, based on their own reporting. When the accord was signed, the companies received positive attention for promising to ensure that their products would not interfere with elections. The Brennan Center, too, praised the accord, but we also asked how the public should gauge whether the commitments amounted to anything more than PR window dressing.

Read the Brennan Center’s Agenda to Strengthen Democracy in the Age of AI >>

Companies had multiple opportunities to report on their progress over the past year, including through updates on the accord’s official website, responses to a formal inquiry from then-Senate Intelligence Committee Chair Mark Warner (D-VA), and direct requests from the Brennan Center.

Several signatories to the accord took advantage of these opportunities, providing evidence that they made progress on their commitments. To the extent these responses have been substantive, they provide important voluntary accountability to the public at a time when many governments have failed to require it. These reported actions may build momentum within the tech industry for stronger safeguards against the threats that unregulated AI poses to elections around the world, but momentum alone is not nearly enough.

While the accord was a positive step, our analysis shows that many signatories did not report on their progress despite having multiple forums to do so, and despite “transparency to the public” being one of the accord’s pledges. Some failed to report any progress at all, and those that did often skipped key commitments when assessing their own performance or offered little detail to back up their assertions.

This highlights a fundamental issue with the AI Elections Accord: the absence of mandatory progress reporting allowed many companies to claim the public goodwill that came with signing the accord without ever having to account for their follow-through.

Even where companies did report substantial progress in meeting their commitments, it is often impossible for independent observers like the Brennan Center to verify that progress. This is the result of a critical flaw in the accord itself. Specifically, civil society groups who could have been partners in the accord were not included as participants, and the accord provided no opportunity for these groups or other independent entities to audit claims of subsequent compliance made by the signers. Furthermore, the lack of any agreed-upon metrics for detailing progress meant the information companies eventually provided on their compliance varied widely and often did not include information that would allow an outside observer to judge the substance of their efforts.

Some might question why this matters given that there is no direct evidence that AI-generated disinformation changed the outcome of any major election. But focusing only on what is immediately measurable ignores the deeper threat to democracy that AI can pose. The 2024 elections showed how AI is already eroding the information ecosystem, from deepfake robocalls impersonating candidates to chatbot-driven misinformation campaigns.

While worst-case scenarios did not materialize, the AI threat is far from overstated. It is very likely that the election interference tactics we witnessed in 2024 are just the beginning. As AI advances, disinformation will become harder to track and counteract. The longer we wait, the more entrenched these tactics will become. Companies must act now before the damage is beyond repair.

Our Analysis

This brief evaluates the degree to which companies met their AI Elections Accord commitments to address AI-generated election disinformation, as self-reported by the companies. Our analysis draws from three reporting opportunities available to signatories since the accord was signed:

  • In May 2024, Sen. Mark Warner sent a written request to signatories asking for details on their implementation efforts.
  • At an unspecified later date, the accord’s organizers gave signatories an opportunity to submit a progress update, which was published on the accord’s website in September 2024.
  • In December 2024, the Brennan Center requested additional information from all 27 signatories beyond what was available on the accord’s website or provided in response to Warner’s letter. Twelve companies responded.

For each of the three reporting opportunities, we evaluated a company’s claims as they applied to the eight commitments of the accord. We gave each company a rating of “commitment met per self-report,” “demonstrated partial satisfaction of commitment per self-report,” or “failure to report/no progress.” Our analysis evaluated company reports by the substance and breadth of their claims, considering only the information they explicitly provided in their responses. We accepted each company’s claims as reported without independently verifying their accuracy or assessing their stated impact metrics.

To ensure fairness, we introduced a “not applicable” (N/A) evaluation for companies whose limited product scope meant they could not reasonably be assessed against certain commitments. For example, our analysis did not expect ARM, a company that designs computer processors, to implement content policies on deceptive AI-generated content. As a result, ARM was marked as N/A for related commitments. A more complete explanation of our methodology, with copies of the evidence we reviewed, can be found in the appendix.
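To make the rating scheme concrete, here is a minimal sketch of how the three-level scale and the N/A category could be encoded. This is our illustration, not the actual tooling used for the analysis.

```python
from enum import IntEnum
from typing import Optional

class Rating(IntEnum):
    """Per-commitment evaluation, ordered from weakest to strongest claim."""
    NO_REPORT = 0   # "failure to report / no progress"
    PARTIAL = 1     # "demonstrated partial satisfaction of commitment per self-report"
    MET = 2         # "commitment met per self-report"

# N/A is modeled as the absence of a rating (None) rather than a fourth
# level: a commitment outside a company's product scope -- e.g., content
# policies for a chip designer like ARM -- should drop out of the
# denominator instead of counting against the company.
Evaluation = Optional[Rating]

def relevant(evaluations: list[Evaluation]) -> list[Rating]:
    """Drop N/A entries so they do not skew per-company statistics."""
    return [e for e in evaluations if e is not None]
```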

Findings

The following sections take a closer look at reporting trends, assessing how companies followed through on their commitments and where they fell short.

Lack of Participation

We began our analysis by cataloging where companies submitted reports, noting whether each signatory made a submission at each reporting opportunity and what portion of the eight commitments the submission addressed.

[Table: signatory participation and share of commitments addressed across the three reporting opportunities]

Companies’ disclosure of their progress fell short. All told, 85 percent of companies provided some information to the senator, but less than half provided a progress report for the accord website or responded to the Brennan Center’s request for information. This pattern underscores a broader challenge: despite their public commitments, many companies did not demonstrate that they followed through on their promises.

Even among the companies that did respond, many failed to provide information on their efforts to meet all relevant commitments. Across the three reporting opportunities, companies addressed an average of only 66.4 percent of the eight commitments (about 5.3 of 8, leaving roughly 2.7 unaddressed). Even in open-ended response formats, such as the replies to Warner, companies addressed only 75 percent of their commitments on average, leaving critical gaps in their reporting.

To be sure, even limited participation in a voluntary commitment like this represents a step in the right direction. One of the benefits of these reporting opportunities is that they provide a platform for engaged companies to display their work. Several companies stood out for their responsiveness, with ElevenLabs, Google, LG AI Research, Microsoft, OpenAI, and TikTok all submitting responses across all three information requests.

Lack of Specificity

We further analyzed the quality and completeness of each company’s submitted responses. By examining claims made across the three reporting opportunities and comparing them against the accord’s eight commitments, we evaluated the extent to which companies provided substantive detail. Using criteria that considered both the breadth of reported actions and the specificity of their implementation, we assigned evaluations to measure completeness.

For this analysis, we developed composite scores for each company by selecting the highest evaluations from across all three reporting opportunities and consolidating them into a single set of scores in the table below. This composite score reflects each company’s most comprehensive reporting efforts, providing a clearer picture of their engagement and transparency.
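Mechanically, this consolidation amounts to taking a per-commitment maximum across the three reports. Below is a minimal sketch with hypothetical data, again our illustration rather than the actual analysis code; it assumes the rating ordering from the previous sketch.

```python
from enum import IntEnum
from typing import Optional

class Rating(IntEnum):  # same ordering as in the previous sketch
    NO_REPORT = 0
    PARTIAL = 1
    MET = 2

# One report maps commitment numbers (1-8) to the rating it earned.
Report = dict[int, Rating]

def composite(reports: list[Report], applicable: set[int]) -> dict[int, Optional[Rating]]:
    """Keep each company's best rating per commitment across all reports.

    A commitment outside the company's product scope is scored None (N/A);
    one that is applicable but never addressed defaults to NO_REPORT.
    """
    scores: dict[int, Optional[Rating]] = {}
    for c in range(1, 9):
        if c not in applicable:
            scores[c] = None
            continue
        addressed = [report[c] for report in reports if c in report]
        scores[c] = max(addressed) if addressed else Rating.NO_REPORT
    return scores

# Hypothetical example: partial progress claimed in the Warner response and
# full progress claimed on the accord website yield a composite of MET for
# commitment 1; applicable but unaddressed commitments fall to NO_REPORT.
warner = {1: Rating.PARTIAL, 4: Rating.MET}
website = {1: Rating.MET}
print(composite([warner, website], applicable=set(range(1, 9))))
```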

[Table: composite evaluations for each signatory across the eight commitments]

The composite scores reveal several patterns. Some of the biggest tech companies, including Google, Microsoft, OpenAI, and TikTok, reported progress across multiple commitments. At the same time, a substantial portion of commitments for every company was marked in yellow, indicating that the company made some effort to meet its obligations but that its reports lacked the specificity or evidence needed to fully verify progress. A strong example of effective reporting is Google’s response to Warner, in which the company detailed its investment in media literacy, its creation of election information panels, and its €25 million contribution to the Global Fact Check Fund.

Other reports were far thinner. For example, the social media platform X’s response to Warner addressed only a portion of its commitments related to AI-generated content detection. It stated that “safety teams remain alert to any attempt to manipulate the platform by bad actors and networks,” but it failed to outline specific mechanisms or actions taken to uphold this commitment. Similarly, Stability.ai, in its response to the Brennan Center, referenced its membership in the Content Authenticity Initiative as evidence of fostering cross-industry resilience but did not detail any concrete initiatives or contributions.

These generalized statements, which acknowledged commitments without demonstrating clear follow-through, contributed to the high number of “partially met” ratings across this analysis. Without clear evidence, it is impossible to assess whether these commitments were met through action or merely acknowledged in principle.

Conclusion

The AI Elections Accord provided a framework for companies to recognize AI-related election risks and commit to mitigation efforts. While some signatories shared evidence of their actions, inconsistent follow-through, vague reporting, and a lack of independent verification made it challenging to assess real progress. Voluntary commitments can encourage responsible practices, but they are not a substitute for enforceable regulation. Without oversight, companies set their own standards for compliance, making public accountability difficult. Congress and state legislatures have taken initial steps, such as passing laws on deepfakes and political communications, but broader action is needed. Policymakers should establish comprehensive transparency requirements, enforcement mechanisms, and safeguards to address the full scope of AI-related election risks. 


That said, there are ways to improve any future accords on AI and elections:

  • Include rigorous reporting and verification requirements: Accords should have more detailed guidance on what to report and ways for civil society groups and the public to verify compliance. Stronger standards will ensure these commitments lead to tangible improvements rather than symbolic gestures.
  • Create an independent audit mechanism: Signatories should pay into a common pool of funds to support independent audits of their compliance efforts. Auditors should be supervised by a board of independent experts from academia and civil society.
  • Enhance civil society engagement: Nonprofits, advocacy groups, and researchers should be actively engaged in the process of creating accords like this one and regularly evaluating company practices to ensure they align with public commitments.
  • Increase public oversight: Journalists, academics, and the public should monitor company actions and assess whether they follow through on their commitments. Greater scrutiny can raise the reputational cost of noncompliance and generate momentum for stronger regulations.

Industry cooperation is valuable, but meaningful action requires clear guidelines, transparency, and external enforcement. Civil society must play a role in holding tech companies accountable, and policymakers must establish safeguards to ensure AI does not undermine election integrity. Stronger oversight is essential to moving beyond symbolic commitments and creating real accountability.