Resource

Appendix: How Tech Companies Performed on Their Promises to Protect Elections from AI

This appendix details the methodology used in the Brennan Center’s analysis of the AI Elections Accord.

Published: February 13, 2025

Read the main article here.

Applying Ratings to Company Claims

We evaluated companies on the substance and completeness of their reporting claims. We considered only the information companies provided in their responses and any information available at the websites linked in those responses. In analyzing the responses, we assumed good faith on the part of the companies and accepted their written claims as truthful and honest. We did not use outside information to independently verify the accuracy of their claims or their stated impact metrics. For future accords, we recommend setting standards for the evidence signatories must provide and funding an independent audit mechanism (see the “Recommendations” section of our analysis) to verify claims.

A “Not Applicable” or “N/A” rating for a specific commitment was assigned for one of two reasons. First, if a company explicitly stated in its responses that a commitment does not apply to its operations, we accepted that statement. Second, if a company could not reasonably have fulfilled a commitment based on its product offerings, we assigned an N/A rating. For example, ARM, a company that designs computer processors, was not expected to implement content policies on deceptive AI-generated content. As a result, we marked related commitments as N/A.

In our analysis, we assigned the rating “Commitment not met or no information provided” when a company either failed to submit a report or did not provide information for a specific commitment within a submitted report. Failure to submit a report accounted for 65.5 percent of “Commitment Not Met” ratings, while failure to address a commitment within a submitted report accounted for 34.5 percent. One example of the latter is Anthropic’s response to Sen. Mark Warner, which provided information on seven of the eight commitments but did not address its commitment to educating the public, resulting in a “Commitment Not Met” evaluation.

We assigned the rating “Demonstrated partial satisfaction of commitment per self-report” when company reporting fell between the standards of “Commitment Not Met” and “Commitment Met.” This was determined by comparing the evidence a company presented in its report against the Accord’s examples or listed actions that a company may or must take to meet the commitment (see the Additional Notes on Company Submissions section for more details on example versus listed actions). Companies received a “Demonstrated Partial Satisfaction” rating when they presented partial evidence toward meeting a specific commitment, including instances where a company cited broad initiatives but did not provide enough specific details or actions to earn a “Commitment Met.” One example is Google’s Progress Update, which stated, “We also support . . . media literacy efforts to help build resilience and overall understanding around Generative AI.” Because this statement claims no specific actions taken and does not cover the breadth of the suggested actions in the public education commitment, it received a “Partially Met” rating.

Lastly, we assigned “Commitment met per self-report” when a company’s claims covered the full breadth of the commitment and presented ample claims of specific actions taken. To evaluate this, we compared a company’s claims of specific actions to the example actions listed in the Accord, granting “Commitment Met” to companies that completed most of the Accord’s example actions. One report receiving this rating is Google’s response to Warner regarding its commitment to public education, in which the company claims to provide election information panels on multiple services, invest €25 million in media literacy educational programs, and support the Global Fact Check Fund. Compared with Google’s Progress Update, discussed in the previous paragraph, this response provides ample claims of specific and relevant actions taken.

Additional Notes on Company Submissions

  • Zefr received an N/A rating for the Warner Letter. Warner’s office did not contact Zefr to offer a reporting opportunity. For this reason, Zefr received an N/A rating for all eight commitments in its Warner Letter evaluation.
  • True Media received an N/A rating for the Brennan Letter. On January 7, True Media announced on its website that it would shut down its deepfake detector and open source its technology. On January 14, it ceased operations. For this reason, True Media received an N/A rating for all eight commitments in its Brennan Letter evaluation.
  • Information on LinkedIn and GitHub received through Microsoft submissions. Microsoft stated in its responses to the Brennan Center that it was responding on behalf of its subsidiaries, LinkedIn and GitHub. However, both LinkedIn and GitHub appear separately in the Accord’s official progress updates. Because these companies signed the Accord separately, our Warner Letter and Brennan Email evaluations consider information from Microsoft reports only where LinkedIn and GitHub are explicitly mentioned by name.
  • Language used in the Accord to reference example actions. Several commitments in the official Accord text use non-binding language such as “this might include” or “for instance.” Other commitments use binding language such as “by supporting the development” or “by sharing best practices.” Our analysis accounted for these differences, applying higher scrutiny to commitments phrased in binding language and lower scrutiny to those phrased in non-binding language.

AI Elections Accord Company Evaluations

AI Elections Accord Progress Updates

AI Elections Accord Company Responses to Sen. Mark Warner

AI Elections Accord Company Responses to Brennan Center Outreach