Expert Brief

Safeguards for Using Artificial Intelligence in Election Administration

Adequate transparency and oversight can ensure AI tools in election offices are helpful and not harmful.

As artificial intelligence tools become cheaper and more widely available, government agencies and private companies are rapidly deploying them to perform basic functions and increase productivity. Indeed, by one estimate, global spending on artificial intelligence, including software, hardware, and services, will reach $154 billion this year, and more than double that by 2026. As in other government and private-sector offices, election officials around the country already use AI to perform important but limited functions effectively. Most election offices, facing budget and staff constraints, will undoubtedly face substantial pressure to expand the use of AI to improve efficiency and service to voters, particularly as the rest of the world adopts this technology more widely.

In the course of writing this resource, we spoke with several election officials who are currently using or considering how to integrate AI into their work. While a number of election officials were excited about the ways in which new AI capabilities could improve the functioning of their offices, most expressed concern that they didn’t have the proper tools to determine whether and how to incorporate these new technologies safely. They have good reason to worry. Countless examples of faulty AI deployment in recent years illustrate how AI systems can exacerbate bias, “hallucinate” false information, and otherwise make mistakes that human supervisors fail to notice.

Any office that works with AI should ensure that it does so with appropriate attention to quality, transparency, and consistency. These standards are especially vital for election offices, where accuracy and public trust are essential to preserving the health of our democracy and protecting the right to vote. In this resource, we examine how AI is already being used in election offices and how that use could evolve as the technology advances and becomes more widely available. We also offer election officials a set of preliminary recommendations for implementing safeguards for any deployed or planned AI systems ahead of the 2024 vote. A checklist summarizing these recommendations appears at the end of this resource. 

As AI adoption expands across the election administration space, federal and state governments must develop certification standards and monitoring regimes for its use both in election offices and by vendors. President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marks a pivotal first step, as it requires federal regulators to develop guidelines for AI use by critical infrastructure owners and operators (a designation that has included owners of election infrastructure since 2017) by late spring 2024.

Under its recently announced artificial intelligence roadmap, the Cybersecurity and Infrastructure Security Agency (CISA) will provide guidance for secure and resilient AI development and deployment, alongside recommendations for mitigating AI-enabled threats to critical infrastructure. But this is only a start. It remains unclear how far the development of these guidelines will go and what election systems they will cover. The recommendations in this resource are meant to assist election officials as they determine whether and how to integrate and use AI in election administration, whether before or after new federal guidelines are published next year.

Current and Potential Future Uses of AI in Election Administration

Artificial intelligence is an umbrella term for computer systems that use data, algorithms, and computing power to perform a range of tasks that historically required human intelligence, such as recognizing speech, identifying patterns in data, and making predictions. Today, AI tools make movie recommendations, power facial recognition, and even drive cars. Generative models — a subset of AI capable of producing realistic text, images, video, and audio in response to user prompts — have garnered widespread public attention since the release of ChatGPT in 2022. Although both generative and non-generative AI can behave unpredictably in new situations, the former’s behavior is often harder to understand and predict, mainly because generative AI is typically built using more parameters and vastly more data. Election officials need to have safeguards in place when using both generative and non-generative AI.

Organizations in the private and public sectors frequently use AI for data management functions such as identifying duplicate records, and election offices are no exception. The Electronic Registration Information Center (ERIC), a multistate voter list maintenance effort, is one example of non-generative AI use in election administration. ERIC’s software employs AI to support voter roll management by searching for duplicate entries across many data sets. ERIC validates possible matches by conducting a human review prior to sending matching data to member states. Once states receive data from ERIC about possible matches, they process them according to their respective state list maintenance rules per requirements established by the National Voter Registration Act. 
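
ERIC’s actual matching methods are not public, but the general pattern it reflects (algorithmic scoring of candidate matches that are then queued for human review rather than acted on automatically) can be illustrated with a minimal sketch in Python. The record fields, weights, and threshold below are hypothetical and purely illustrative, not ERIC’s method.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class VoterRecord:
    record_id: str
    name: str
    dob: str       # ISO date string, e.g. "1954-07-02"
    address: str

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(r1: VoterRecord, r2: VoterRecord) -> float:
    """Weighted combination of field similarities (illustrative weights)."""
    return (0.5 * similarity(r1.name, r2.name)
            + 0.3 * (1.0 if r1.dob == r2.dob else 0.0)
            + 0.2 * similarity(r1.address, r2.address))

def find_candidate_pairs(records, threshold=0.85):
    """Queue likely duplicates for human review -- never remove automatically."""
    queue = []
    for i, r1 in enumerate(records):
        for r2 in records[i + 1:]:
            score = match_score(r1, r2)
            if score >= threshold:
                queue.append((r1.record_id, r2.record_id, round(score, 3)))
    return queue

records = [
    VoterRecord("A-001", "Maria Lopez", "1954-07-02", "12 Elm St, Springfield"),
    VoterRecord("B-117", "Maria Lopez", "1954-07-02", "12 Elm Street, Springfield"),
    VoterRecord("A-002", "Mario Lopes", "1988-01-19", "9 Oak Ave, Riverton"),
]
for pair in find_candidate_pairs(records):
    print("Flag for human review:", pair)
```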

ERIC’s ability to identify potential matches using various data sets is considerably more advanced than earlier systems, such as the Interstate Voter Registration Crosscheck program, which utilized rudimentary data matching with limited data fields and led to high numbers of false positive identifications. In a typical example, a false positive would incorrectly identify two distinct voters on different voter rolls (or other databases, like the Social Security Administration’s Limited Access Death Master File) as being the same person. False positives increase the workload and cost associated with list maintenance processes. More importantly, they can harm eligible voters and lead to disenfranchisement.

Election offices also use non-generative AI to match mail-in ballot signatures, historically a time- and labor-intensive task. Although the specific technology varies by vendor, ballots are generally fed through a scanner that captures an image of the signature and compares it with a signature already on file. Signatures that the software can match are processed for counting; signatures that cannot immediately be verified are set aside for human review and further analysis. This automation allows election offices to focus on researching and validating a smaller set of signatures before processing ballots for counting, thereby saving time and resources. Election offices should account for potential bias by incorporating into training examples of signatures that the software is more likely to reject, such as those of first-time or elderly voters, and by developing guidance for appropriate human review of those ballots.
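
The vendor algorithms behind this process vary and are generally proprietary, but the triage logic described above can be sketched in simplified form: accept only above a confidence threshold, and route everything else to people. The threshold and the stand-in comparison model below are hypothetical.

```python
from typing import Callable

AUTO_ACCEPT_THRESHOLD = 0.90  # illustrative; set per office risk tolerance

def triage_ballot(ballot_id: str,
                  scanned_signature: bytes,
                  reference_signature: bytes,
                  compare: Callable[[bytes, bytes], float]) -> str:
    """Route a ballot based on signature-match confidence.

    `compare` stands in for a vendor's computer vision model and is
    assumed to return a confidence score in [0, 1]. Ballots are never
    rejected automatically: anything below the threshold goes to staff.
    """
    confidence = compare(scanned_signature, reference_signature)
    if confidence >= AUTO_ACCEPT_THRESHOLD:
        return f"{ballot_id}: signature verified, process for counting"
    return f"{ballot_id}: set aside for human review (score={confidence:.2f})"

# Toy stand-in model for demonstration only.
def fake_model(scanned: bytes, reference: bytes) -> float:
    return 0.95 if scanned == reference else 0.40

print(triage_ballot("MB-1021", b"sig-a", b"sig-a", fake_model))
print(triage_ballot("MB-1022", b"sig-b", b"sig-a", fake_model))
```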

An increasing number of election offices are using AI chatbots to answer basic voter questions as well. (Chatbots can be either generative or non-generative, though we are not aware of any election offices yet exploring generative AI for this purpose.) Chatbots like those used by the New York State Department of Motor Vehicles and the California secretary of state can provide information outside of normal office hours and free up staff to deal with more complex issues. This technology also helps voters navigate election websites, providing important information like polling place times and locations and answers to frequently asked questions.

Non-generative AI chatbots typically produce pre-vetted responses or use a form of natural language processing similar to Amazon’s Alexa virtual assistant technology. While both non-generative and generative AI chatbots risk unreliable or biased results, the latter would entail the added burden of verifying the accuracy of synthesized content. Any generative AI chatbot that an election office seeks to employ would require sufficient development and testing time to ensure, for example, that it accurately answers voter questions but appropriately redirects questions complicated or consequential enough to need staff attention.
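
A minimal sketch of the pre-vetted approach helps show why it is lower risk: every answer comes from an approved library, and anything the system does not recognize is escalated to staff. Here, simple keyword matching stands in for the natural language processing a production chatbot would use, and all questions and answers are invented.

```python
# Every answer is drawn from a pre-vetted library; nothing is generated.
VETTED_RESPONSES = {
    "polling hours": "Polling places are open from 6 a.m. to 8 p.m. on Election Day.",
    "register": "You can check your registration status on the state election website.",
    "absentee": "Absentee ballot applications must be received by the published deadline.",
}

ESCALATION = ("I can't answer that reliably. Please contact the election "
              "office so a staff member can help you.")

def answer(question: str) -> str:
    q = question.lower()
    for keyword, response in VETTED_RESPONSES.items():
        if keyword in q:
            return response  # pre-approved text only
    return ESCALATION        # unrecognized questions go to humans

print(answer("What are the polling hours on Tuesday?"))
print(answer("Can my cousin drop off my ballot for me?"))  # escalated to staff
```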

In addition, some election offices are considering using AI to help create and translate voting materials. Here too, election officials would need to mitigate AI’s potential to provide incorrect or unreliable information. At the very least, they would need to implement strong internal controls to ensure that all products are reviewed by the appropriate staff before release and corrected where needed — particularly when AI tools assist in translating election materials. This level of scrutiny is crucial given the nuance related to voter registration and voting requirements.

As the technology evolves, election officials will likely find myriad new ways for AI to assist in election administration. AI could serve as an extra proofreader for election materials and an extra set of “eyes” to ensure that ballots and other materials comply with the law and best design practices, or that materials are correctly translated. AI could also be used to identify new polling place locations based on traffic patterns, travel time for voters assigned to a polling place, public transportation routes, parking availability, and other factors. AI systems could even be used to analyze postelection data to improve future elections, identifying patterns in provisional voting, voter registration application rejections, and reasons for rejecting absentee or mail-in ballots. The possibilities are innumerable. Yet all of them will require similar quality-control standards and safeguards.
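
As a trivially simple illustration of the postelection analysis idea, the sketch below tallies hypothetical mail ballot rejections by reason and by precinct; a real system would apply statistical or machine learning methods to far richer data.

```python
from collections import Counter

# Hypothetical postelection records: (precinct, rejection_reason) pairs for
# mail ballots that were not counted. All values are illustrative.
rejections = [
    ("Precinct 4", "signature mismatch"),
    ("Precinct 4", "signature mismatch"),
    ("Precinct 4", "missing witness signature"),
    ("Precinct 9", "arrived after deadline"),
    ("Precinct 9", "signature mismatch"),
]

# Tally rejections by reason and by precinct to surface patterns worth
# investigating before the next election.
by_reason = Counter(reason for _, reason in rejections)
by_precinct = Counter(precinct for precinct, _ in rejections)

print("Rejections by reason:", by_reason.most_common())
print("Rejections by precinct:", by_precinct.most_common())
```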

Risks Associated with AI Use in Elections

The risks associated with integrating AI into election processes are considerable. Some of them are inherent to AI technology, while others arise from human-machine interactions. A particular risk lies in the inevitable differences between an AI system’s training data and the data it uses when deployed. As a result of this data disparity, AI typically performs worse in operation than on the benchmarks or performance results obtained during testing and presented by vendors.

As other resources in this series have explored, AI trained using past data and past decisions also risks perpetuating biases embedded in those decisions. This all-too-common phenomenon can systematically disenfranchise groups of voters if the historical bias is not mitigated during an AI tool’s development and implementation. Furthermore, generative AI chatbots can suffer from “hallucinations” — delivering incorrect information presented as fact — which risks providing voters with wrong information. Spotting incorrect or hallucinated information is difficult in contexts where election office staff cannot oversee the chatbot’s responses, such as real-time interactions with voters. As such, using generative AI for election administration functions is often high risk.

Election office staff should review AI tools’ decisions and outputs whether those systems use generative or non-generative AI. Such reviews will require training to mitigate automation bias (the tendency to over-rely on automated decisions because they appear objective and accurate) and confirmation bias (the predisposition to favor information when it confirms existing beliefs or values). Insufficient transparency about where and how AI tools are used in processes that affect voters’ ability to cast their ballots compounds these risks by preventing external scrutiny. Internal system evaluations are generally protected information, and independent external analyses are often impossible because election offices cannot share data with third parties.

Two other concerns are worth mentioning, although their discussion and mitigation are beyond the scope of this resource. First, as an earlier installment in this series discusses in more detail, election officials will need to take steps to prevent attacks against AI systems integrated into election administration. Second, the use of AI in certain contexts has implications for constituents’ privacy. Using voters’ data — usually without their knowledge or consent — to train or fine-tune AI tools provided by third parties raises serious concerns around data protection, ownership, and control, especially when records contain sensitive information like names, birth dates, addresses, and signatures. Moreover, some AI uses touch on the principle of anonymity in elections. The use of biometric data for voter registration or identity verification creates a log of identified voter behavior that could threaten voter anonymity if paired with local ballot counts or similar data.

The absence of regulations or governmental guidance on safe AI implementation amplifies these risks. As resource-constrained election officials look to the benefits of AI, they must also assess the risks and potential downsides of adopting these new technologies. In doing so, they must recognize that incorporating AI tools in election administration without appropriate risk mitigation measures and transparency could compromise voter confidence heading into the 2024 election cycle and beyond.

Recommendation for Election Offices: AI CPR (Choose, Plan, and Review)

In deciding whether to employ AI, election officials should implement and follow a transparent selection process to choose the specific AI tool for any given election administration task. If and when they do choose a particular AI system, officials need to carefully plan that system’s integration into their workflows and processes. Part of that planning must include identifying and preparing for problems that may surface as the system is incorporated. They must also be able to shift resources as needed. Finally, they must establish thorough review procedures to ensure that the output of any AI tool deployed in an election office is assessed by staff for accuracy, quality, transparency, and consistency. Below, we describe important considerations at each of these three stages.

Choose AI Systems with Caution 

Opt for the Simplest Tool

In choosing any system (AI-based or not) for use in election administration, all else being equal, we recommend that election officials choose the simplest tool possible. When it comes to AI, though simpler AI algorithms may be less refined than more complex ones, they are also easier to understand and explain, and they allow for greater transparency. Should questions or anomalies arise, determining answers and solutions will be easier with a simple AI model than an elaborate one. The most complicated AI systems currently available belong to the latest class of generative AI, followed by non-generative neural networks and other deep learning algorithms. Basic machine learning models like clustering algorithms or decision trees are among the simplest AI tools available today.
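
The difference in transparency is easy to demonstrate. In the sketch below, which uses the scikit-learn library and invented duplicate-detection data, the entire logic of a shallow decision tree can be printed and audited line by line, something a deep neural network does not allow.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: [name_similarity, dob_match, address_similarity]
# with labels 1 = likely duplicate record, 0 = distinct records.
X = [
    [0.95, 1, 0.90],
    [0.90, 1, 0.40],
    [0.30, 0, 0.20],
    [0.85, 0, 0.10],
    [0.98, 1, 0.95],
    [0.20, 0, 0.80],
]
y = [1, 1, 0, 0, 1, 0]

# A shallow tree keeps the decision logic small enough to read in full.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a neural network, the complete model can be printed and explained.
print(export_text(tree, feature_names=["name_sim", "dob_match", "addr_sim"]))
```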

A useful practice to facilitate choosing the simplest possible system is for election officials to narrowly define the tasks that the AI will perform and identify a list of requirements. Requirements can range from price considerations or necessary IT and data infrastructure to the need for additional functionalities or minimum performance levels reflecting the risk level that election officials are willing to accept for a given task. Establishing these parameters ahead of the selection process will help both to ensure transparency around the criteria used for assessing proposals and to prevent “scope creep” when vendors demonstrate capabilities of more advanced systems.

Plan for Human Involvement 

If an AI tool could result in someone being removed from the voter rolls, being denied the ability to cast a ballot, or not having their vote counted, then election officials should choose a system that requires human involvement in making final decisions. Human involvement helps to safeguard against AI performance irregularities and bias. Most jurisdictions have processes that require additional review before rejecting vote-by-mail or absentee ballots for a signature mismatch. Generally, this review involves bipartisan teams that must reach a consensus before rejecting a ballot. Twenty-four states currently have processes in place that require election offices to notify voters should questions arise about their signature and to provide them the opportunity to respond and cure the issue. Such processes are vital to ensure that AI systems do not inadvertently prevent voters from having their votes counted. The planning and review stages outlined below will need to factor in this human involvement.

Anticipate Performance Disparities, Reliability Issues, and Output Variability 

When selecting an AI tool, election officials should assume that the system will not perform as effectively as vendor metrics claim. In developing and training AI models, vendors inevitably use a training data set that is different from the data the AI is fed during actual use. This scenario frequently leads to degraded performance in real-world applications. Additionally, because of idiosyncrasies in the data, differences in data collection processes, and population differences, the same AI tool’s performance can vary substantially between districts and between population groups within the same district. As a result, AI tools are likely to perform less effectively on actual constituents’ data compared with benchmarks or results presented by vendors.

In particular, name-matching algorithm performance has been shown to vary across racial groups, with the lowest accuracy found among Asian names. A study of voter list maintenance errors in Wisconsin also revealed that members of minority groups, especially Hispanic and Black people, were more than twice as likely as white people to be inaccurately flagged as potentially ineligible to vote. Similarly, AI-powered signature matching achieves between 74 and 96 percent accuracy in controlled conditions, whereas in practice, ballots from young and first-time mail-in voters, elderly voters, voters with disabilities, and nonwhite voters are more likely to be rejected. Unrepresentative training data, combined with low-quality reference images to match against (often captured using DMV signature pads), lowers the effectiveness of signature-matching software.

Implementing this technology for voter roll management thus raises major concerns. One mitigation strategy that election officials can utilize in choosing AI systems is to require vendors to use a data set provided by the election office for any demonstrations during the request for proposal and contracting process. This approach can provide further insight into system performance. Importantly, election officials should ensure that only publicly available data is used or that potential vendors are required to destroy the data after the selection process has concluded and not retain or share the data for other purposes.
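
One concrete way to use such a demonstration data set is to compute the tool’s false rejection rate separately for each demographic group rather than relying on a single aggregate accuracy figure. The sketch below shows the calculation on invented records; the group labels and fields are hypothetical.

```python
from collections import defaultdict

# Hypothetical evaluation records from a vendor demo on office-provided data:
# (demographic_group, tool_accepted, ground_truth_valid)
results = [
    ("group_a", True,  True),
    ("group_a", False, True),   # false rejection
    ("group_a", True,  True),
    ("group_b", False, True),   # false rejection
    ("group_b", False, True),   # false rejection
    ("group_b", True,  True),
]

# False rejection rate per group: valid signatures the tool failed to accept.
valid = defaultdict(int)
falsely_rejected = defaultdict(int)
for group, accepted, truly_valid in results:
    if truly_valid:
        valid[group] += 1
        if not accepted:
            falsely_rejected[group] += 1

for group in sorted(valid):
    rate = falsely_rejected[group] / valid[group]
    print(f"{group}: false rejection rate = {rate:.0%}")
```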

Although a general strength of generative AI is its ability to respond to unanticipated or unusual requests or questions, election officials must bear in mind that current generative AI tools often suffer from reliability issues. Generative AI chatbots may produce different responses to the same request, and they regularly produce incorrect or hallucinated replies. In addition, the underlying language models are frequently fine-tuned and updated, which in turn affects the behavior of systems built on them.

Finally, when deciding whether to use a generative tool, election officials must consider whether variations in content and quality are acceptable. For most election-related tasks, variability that could result in an office propagating misinformation is not a tolerable outcome. As such, election offices should not adopt generative AI systems for critical functions without national or state standards in place to guide appropriate uses and provide baseline assurances of system reliability and safety.

Plan for AI Use — and for Potential Problems

Election offices should devise both internally and externally focused implementation plans for any AI system they seek to incorporate. Internally, election officials should consider staffing and training needs, prepare process and workflow reorganizations, and assign oversight responsibilities. Externally, they should inform constituents about the AI’s purpose and functionality and connect with other offices employing the same tool. Most importantly, officials should develop contingency plans to handle potential failures in deployed systems.

Develop Staff Training

Before deploying an AI tool, election officials must consider the training needs of their staff. While the following list is not all-inclusive, training should impart a high-level grasp of the AI system. Staff must understand the exact tasks the AI performs, its step-by-step processes, the underlying data utilized, and its expected performance. For instance, rather than thinking of a signature verification system simplistically as a time-saving bot that can verify mail-in ballots, staff should see it as a software tool that attempts to match the signature on a ballot to an image on record using a computer vision algorithm, and that does so with an average accuracy rate of 85 percent.

At a minimum, staff training should cover

  • familiarization with the user interface;
  • common risks and issues associated with data and AI (such as those described above), how they could occur in the context of the office’s constituency and its election administration work, the system’s limitations, and how to address problems;
  • internal processes for flagging issues with the AI and accountability guidelines in case of failure or errors; and
  • requirements for — and the importance of — human involvement in decisions that directly implicate voter rolls, vote casting, and vote counting, including techniques for mitigating bias.

Prioritize Transparency

Constituents have a right to know about AI systems involved in election administration. Election officials must be transparent about when, for what, and how AI tools will be used. Before deployment, election offices should work with the AI developers to prepare and publish documentation in nontechnical language. These documents should describe the system’s functionality and how it will be used, what is known about its performance, limitations, and issues, and any measures taken to mitigate risk for the particular election administration task for which it will be deployed. Constituents should have opportunities to discuss questions and concerns with officials to build trust in the technology and in election administrators’ oversight capabilities. The need for transparency and documentation should be outlined in the request for proposal process and included in vendor contracts so that relevant information cannot be hidden from public view under the guise of proprietary information. 

Prepare Contingency Plans

Election officials must have contingency plans in place before incorporating AI technology. AI contingency plans must include appropriate preparations to manage any potential failures in a deployed AI system. First and foremost, election offices must be able to disable an AI tool without impairing any election process — a fundamental best practice for using AI in a safe and trustworthy manner. AI tools should not be integrated into election processes in a way that makes it impossible to remove them if necessary.

Contingency plans must identify the conditions under which an AI tool will be turned off along with which staff members are authorized to make such a determination. Election offices must ensure that staff are aware of these conditions and are trained to identify them and to report issues, flaws, and problems to the responsible officials. Offices must also have a strategy in place for how to proceed if the use of AI is halted. This strategy should include identifying additional personnel or other resources that can be redirected to carry out certain tasks to ensure their timely completion.
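
The requirement that an AI tool be removable implies a particular design: the tool sits as an optional layer in front of an existing manual process, so disabling it redirects work rather than halting it. The sketch below illustrates that principle with a hypothetical signature triage workflow; the class and method names are invented.

```python
# Sketch of the kill-switch principle: the AI tool is an optional layer in
# front of the manual process, so disabling it reroutes work instead of
# stopping the election process. All names here are hypothetical.

class SignatureTriage:
    def __init__(self, ai_enabled: bool = True):
        self.ai_enabled = ai_enabled

    def disable_ai(self, authorized_by: str, reason: str) -> None:
        """Only designated officials should invoke this; record who and why."""
        self.ai_enabled = False
        print(f"AI disabled by {authorized_by}: {reason}")

    def route(self, ballot_id: str) -> str:
        if self.ai_enabled:
            return f"{ballot_id} -> AI signature check, human review on low confidence"
        # Fallback path: the election process continues without the AI.
        return f"{ballot_id} -> full manual bipartisan review"

triage = SignatureTriage()
print(triage.route("MB-2001"))
triage.disable_ai(authorized_by="Deputy Director", reason="unexplained rejection spike")
print(triage.route("MB-2002"))
```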

Seek Other Users’ Input

The experiences of other users can help inform election offices newly adopting AI tools. Election officials should ask potential vendors for lists of other offices currently using their systems during the request for proposal process and should reach out to those offices when evaluating bids. Many AI tools are relatively new, so users are often the ones who discover their strengths and weaknesses. Learning from other users’ experiences in the elections space will be valuable for shaping effective training and implementation and for identifying resource needs and contingencies. 

Review AI Processes and Performance 

System reviews are an essential best practice when using AI tools. The extent and frequency of reviews will vary depending on the gravity of the election administration task at hand and the risk associated with it. Low-risk or low-impact applications (for example, an AI system used to check whether ballots comply with best design practices) may only need a process for getting user or voter feedback and a periodic review of the AI’s performance. However, systems that help decide if someone gets to vote or if a vote is counted need more frequent and direct human oversight.

Institute Straightforward Review Processes

Election officials should establish clear processes for collecting, assessing, and resolving issues identified by both internal and external stakeholders and for reviewing AI system performance. These processes should include soliciting staff and constituent feedback, monitoring use and output logs, tracking issues, and surveying help desk tickets.
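
These monitoring activities presuppose that every AI-assisted decision is recorded in a form reviewers can actually query. A minimal sketch of such a structured log entry appears below; the field names are illustrative, not a standard.

```python
import json
from datetime import datetime, timezone
from typing import Optional, TextIO

def log_ai_decision(log_file: TextIO, system: str, input_ref: str,
                    output: str, confidence: float,
                    reviewer: Optional[str]) -> None:
    """Append one structured record per AI-assisted decision so that
    performance audits and issue tracking have data to work from."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_ref": input_ref,        # e.g., a ballot or ticket identifier
        "output": output,
        "confidence": confidence,
        "human_reviewer": reviewer,    # None if no human was involved
    }
    log_file.write(json.dumps(record) + "\n")

with open("ai_decision_log.jsonl", "a") as f:
    log_ai_decision(f, "signature-match", "MB-3317",
                    "set aside for review", 0.62, reviewer=None)
```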

Audits of issues and performance should occur before and after elections. Pre-election reviews are paramount for safeguarding voting rights and for determining whether an AI system’s contingency plan needs to be activated. Postelection reviews will help improve future use and should assess all processes that AI touched, including evaluations of performance across demographic groups to reveal any potential biases. These reviews present an opportunity for election officials to work with federal partners on meaningful assessment tools for deployed AI systems, much as federal agency assessment tools exist for reviewing polling place accessibility and election office cybersecurity.

Ensure Human Involvement in Final Decisions That Affect Voters

People are the most critical factor in the successful deployment of AI systems in election offices. Decisions that directly affect an individual’s right to vote and ability to cast a ballot cannot be left solely to AI — trained individuals must be involved in reviewing consequential decisions based on AI analysis and AI-produced information. Regarding AI-assisted translations of election materials, if staff are not fluent in all relevant languages, officials should consider partnering with trusted local community groups to ensure translation accuracy. When incorporating AI technology into election administration processes, officials should also consider that these additional trainings and reviews may add costs or shift them to different points in the election calendar.

Establish Challenge and Redress Procedures 

Election officials must provide a process for challenging and reviewing AI-assisted decisions. Voters harmed by decisions made based on AI should be able to appeal and request reviews of those decisions. How these processes should occur will vary from jurisdiction to jurisdiction; existing state and local procedures for review and remedy should be assessed for appropriateness in light of AI-assisted decision-making and amended where necessary. For instance, what if a voter is directed to the wrong polling place by an agency chatbot and forced to cast a provisional ballot as a result? That voter needs a way to make sure that their ballot is counted nonetheless, especially because the action was prompted by inaccurate information provided by the election office. This is to say nothing of errors generated by AI-based signature-matching software, for example, or any number of other conceivable AI errors.

Enacting clear and accessible processes for constituents to challenge AI-driven decisions — processes that initiate a swift human review and an appropriate resolution — is imperative both to provide an added layer of protection to voting rights and to continually evaluate the performance of AI systems employed in election administration.

Conclusion

Increasing AI integration in election administration presents many opportunities for improving the voter experience and increasing the efficiency of election offices, but it also introduces new risks to electoral integrity and to the fundamental democratic principle of free and fair elections. While the capabilities of AI products have grown rapidly over the past few years, many of their inherent problems remain unsolved. AI systems often behave in unreliable ways and perform more effectively for some demographics than for others. Many AI tools are inscrutable in their decision-making processes, preventing meaningful human oversight. These issues, left unchecked, can erode citizens’ basic constitutional right to vote. The AI CPR recommendations laid out in this resource are intended to serve as a road map for mitigating these risks. Election officials seeking to use AI should follow this road map as they adopt this exciting but fraught technology.

Appendix: AI CPR Checklist 

We cannot expect local election offices to create safeguards for the use of AI technology by themselves any more than we can expect them to defend themselves single-handedly against cyberattacks by nation-states. This overview of AI CPR (Choose, Plan, and Review) is a non-exhaustive list of considerations to help election officials determine whether and how to incorporate AI in election administration.

Choose with caution.

  • Choose the simplest systems (including non-AI systems) that meet your needs.
  • If you choose an AI tool that implicates the voter roll, vote casting, or vote counting, choose one that requires human involvement for decisions.
  • Assume that the system will perform worse in operation relative to vendor metrics and decide if that is acceptable for the application at hand.
  • Avoid adopting generative AI systems for critical tasks without national or state standards in place to guide appropriate uses.

Plan for use and problems.

  • Establish processes for transparency (toward constituents and stakeholders alike) around when, for what, and how AI systems will be used.
    • Prepare documentation and publications in nontechnical language.
    • Provide information about the purpose, functionality, and oversight of AI tools.
    • Inform constituents about opportunities to raise issues and contest AI-supported decisions.
  • Have a contingency plan in place before deploying AI technology.
  • Identify the conditions that warrant suspending AI systems and what will happen in their absence.
  • Ensure sufficient training of staff. At a minimum, training should cover
    • a high-level understanding of and familiarity with the AI system;
    • effective and safe use of the AI tool;
    • common risks and issues, along with how they may occur during use;
    • internal processes for flagging issues with the system and accountability guidelines in case of failure or errors; and
    • how staff will make final determinations if an AI tool is used to support decisions that directly implicate voter rolls, vote casting, or vote counting.
  • Ask potential vendors for lists of other election offices currently using systems under consideration. Contact those offices for advice and lessons learned.

Review processes and performance. 

  • Implement review processes and create infrastructure for safe use of AI tools. At a minimum:
    • Collect the information needed for reviewing the performance and integration of AI systems. This information can include staff feedback, use and output logs, constituent feedback, issue tracking, and help desk tickets.
    • Ensure human review of decisions that directly implicate voter rolls, vote casting, and vote counting.
    • Provide means for constituents harmed by AI systems to request that decisions be reviewed and changed if necessary.
    • Gather and resolve issues found both internally and externally.
  • Postelection:
    • Review and refine processes based on lessons learned and constituent feedback.
    • Conduct performance evaluations of processes that incorporated AI systems. In particular, assess whether acceptance or false rejection rates varied for demographic groups.

Heather Frase and Mia Hoffmann are, respectively, a senior fellow and a research fellow at Georgetown’s Center for Security and Emerging Technology. Edgardo Cortés and Lawrence Norden are, respectively, an election security advisor and the senior director of the Brennan Center Elections and Government Program.