2024 has been dubbed the biggest election year in history, with more than 60 countries slated to choose new leaders and representatives in the coming months. Alarmingly, these elections will unfold in an information environment that’s being muddied by the rise of artificial intelligence tools. Responding to calls for proactive measures against the threat posed by these new technologies, in February, 20 of the world’s leading AI developers, social media platforms, cybersecurity firms, and other tech companies announced an accord to combat the deceptive use of AI in elections.
The eight voluntary pledges get at some of the issues that worry election observers the most, with commitments to flag deepfakes and remove deceptive content when needed. But they’re also entirely unenforceable and don’t include ways to gauge the signatories’ progress in accomplishing them. In this watershed year for democracy, companies can be much more forthcoming about how they’re guarding against the threats they helped unleash.
The accord comes on the heels of heightened public attention to how this technology could disrupt elections around the world, following several high-profile misuses of generative AI: fake audio of a top Slovakian candidate that circulated two days before the country’s election, a deepfake of Taiwan’s presidential frontrunner praising his opponents in the run-up to the island’s national elections, and a deepfake Biden robocall that encouraged voters to stay home from the New Hampshire primary.
The promises made in the accord are far from everything the companies could have offered. Notably, they do not address text as a form of AI-generated content, even though AI-powered chatbots have already been used to help bad actors commit fraud and spread election misinformation in both American and foreign elections. The commitments also ring hollow coming from signatories such as Stability AI and Meta, which continue to freely distribute unsecured “open-source” AI systems that, in the wrong hands, make ideal tools for election interference. OpenAI and Microsoft have reported that the governments of China, Russia, Iran, and North Korea have all used their AI systems as hacking tools; companies that release unsecured AI systems, by contrast, have no way of knowing the extent to which their tools are being used by bad actors bent on manipulating the democratic process.
Additionally, while it is noteworthy that the accord mentions watermarking and tracking the provenance of AI-generated content, it’s unclear how helpful these measures will be. The most widely adopted standard for marking deepfake images comes from the Coalition for Content Provenance and Authenticity (C2PA), and its provenance marks are trivially easy to strip in seconds. The accord, however, also references Google’s SynthID watermarking technology, which purports to be resilient to removal. Hopefully the accord, along with Google’s recent decision to join the C2PA steering committee, signals Google’s readiness to share SynthID’s underlying technology with all of the signatories.
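To make that fragility concrete: a C2PA manifest typically travels as metadata inside the image file, so simply re-encoding the pixels yields a visually identical copy with the provenance record gone. The sketch below is a hypothetical illustration of that weakness using the Pillow imaging library; it is not any signatory’s tooling, and the file names are placeholders.

```python
# Hypothetical sketch: why metadata-based provenance marks (such as a C2PA
# manifest) are easy to strip. The manifest lives in the file's metadata,
# so re-saving only the pixel data drops it along with EXIF, XMP, etc.
from PIL import Image  # assumes the Pillow library is installed

def strip_provenance(src_path: str, dst_path: str) -> None:
    """Re-encode an image from its pixels alone, discarding embedded metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # fresh image, no metadata attached
        clean.paste(img)                       # copy pixel data only
        clean.save(dst_path)

# Example usage (placeholder file names):
# strip_provenance("labeled_deepfake.jpg", "unlabeled_copy.jpg")
```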
Even with all these caveats, if the companies that signed the accord follow through robustly on their commitments, it could make a real difference to the integrity of the information environment surrounding elections in the United States and around the world in 2024. Among other things, it could increase the chances that deepfakes are discovered and blocked before they can deceive the public. More broadly, these safeguards could make it harder for bad actors to interfere in our elections.
But this brings us to the accord’s greatest weakness. As several commentators have noted, the commitments are both somewhat vague and entirely voluntary. The accord contains no mechanism requiring companies to report on their progress, nor does it suggest benchmarks against which others could track that progress.
Indeed, many of the pledges in the accord are the same as, or quite similar to, the promises that half of these companies made to the Biden administration last summer. Given their lack of reporting, who can judge how much they have done since then to live up to those commitments? And over the next few months, how do we judge not just whether AI affects our elections, but whether this accord was anything more than PR window-dressing?
Among other things, we would like to see each of the companies release (at a minimum) monthly reports that detail:
- Any new policies they have introduced on AI and elections, with regular updates on the actions taken under those policies and on how they are ensuring the policies cover third-party use of their products.
- Their investment in identifying AI-generated materials and their incorporation of new technologies into their products related to watermarking, content provenance, or labeling. Reporting should include the number of employees working full time on this issue and the monetary resources allocated.
- The results of risk assessments performed around the use of AI models developed by the companies as they relate to deceptive AI content, with a specific focus (when relevant) on risks from unsecured “open-source” AI systems.
- How much AI-generated content has been removed, how frequently users are blocked from creating election-related content, and under what policies.
- Any AI systems, software, or hardware found to have been used by state actors (including China, Iran, North Korea, and Russia) or by digital mercenaries with a track record of interfering in elections.
- How many human content moderators, threat investigators, and other trust and safety or integrity staff work for the company, with details on how many of them are fluent in the languages spoken in all the countries holding elections this year.
- The steps taken, and the money invested, to foster public awareness and build societal resilience to deceptive AI content.
- How the company has engaged with civil society organizations, academics, and other subject-matter experts to support its efforts on AI and elections.
In theory, this kind of reporting should already be happening under the accord, which includes a commitment to “provide transparency to the public.” But its lack of specificity means we really have no idea what its signatories will do. We urge the companies to make good on the transparency and accountability the accord promises by committing to detailed, regular reporting.
Collectively, these companies have unleashed upon the world a set of tools and technologies that threaten, in their own words, to “jeopardize” our democratic systems, and they have reaped enormous profits in doing so. At this point, the democracies of the world that may pay the biggest price need more than promises. We need accountability.