
EAC Voting Advocate Roundtable

Responses to discussion questions during voting advocate roundtable discussion.

Published: April 24, 2008


United States Election Assistance Commission
Voting Advocate Roundtable Discussion

  Written Comments of
Lawrence D. Norden

Brennan Center for Justice at New York University School of Law

April 24, 2008

The Brennan Center thanks the EAC for holding this roundtable.  We appreciate the opportunity to share our thoughts regarding the next iteration of the Voluntary Voting System Guidelines (the “VVSG”).

In just five years, the vast majority of Americans have gone from using punch card and lever machines to having their votes counted by electronic touch screens and optical scanners.[1]  Unfortunately, as the Brennan Center and others have noted, this massive change took place without adequate development and implementation of the procedures necessary to ensure that our new electronic voting systems were as secure and reliable as possible.  In retrospect, the results of this failure were all too obvious: a crisis in public confidence in the voting systems most widely used across our nation, and the certification and use of voting systems with serious security, accuracy, and reliability flaws.

Fortunately, there is widespread agreement among experts who have studied voting system security about what must be done to make electronic voting systems more secure and reliable.  The newest draft VVSG takes significant strides in this respect and represents a major improvement over previous iterations.  In particular, its requirements for Software Independence and Independent Voter-Verifiable Records, as well as its inclusion of Open Ended Vulnerability Testing and usability benchmarks, are essential advances for ensuring secure and reliable elections.

RESPONSES TO DISCUSSION QUESTIONS

1. On October 7, 2005, the National Institute of Standards and Technology (NIST) held a “Risk Assessment Workshop” in order to evaluate threats to voting systems.  The results of that workshop can be found at http://vote.nist.gov/threats/.  In so doing, NIST recognized the importance of evaluating threats when developing a secure voting system, but no formal risk assessment was developed.  The EAC is now interested in learning how best to develop a risk assessment framework to provide context for evaluating the security implications of using various technologies in voting systems.

The Brennan Center applauds the EAC’s interest in developing a risk assessment framework.  In 2005, in response to growing public concern over the security of electronic voting systems, the Brennan Center assembled a task force (the “Security Task Force”) of the nation’s leading technologists, election experts, and security professionals to conduct a risk assessment of the nation’s electronic voting machines.[2]  The goal of the Security Task Force was simple: to quantify and prioritize the greatest threats that attacks on voting systems pose to the outcome of a statewide election, and to identify the steps we can take to minimize those threats.

Working with election officials and other experts for nearly eighteen months, the Security Task Force analyzed the nation’s major electronic voting systems (using much of the data collected through NIST’s Risk Assessment Workshop), ultimately issuing The Machinery of Democracy: Protecting Elections in an Electronic World (the “Brennan Center Security Report”) in June 2006.[3]

a.      What are the essential elements of a risk assessment?

First, there must be agreement on what types of risks are being assessed: malice or mistake, the risk of changing the results of an election, the risk of shaking public confidence in the outcome of an election, the risk that inaccuracies will go undetected, the risk that inaccuracies cannot be corrected, etc.  Ideally, a risk assessment will be multi-faceted and will consider many possible risks and threats.

Second, there must be agreement on the metric used to approximate the difficulty of successfully executing an attack, or the likelihood that a mistake will occur.  There is no perfect metric for either of these questions.  Each potential attack against a voting system requires a different mix of resources: well-placed insiders, money, programming skills, security expertise, etc.  Different attackers would find different resources easier to acquire than others.  In determining the metric for quantifying the difficulty of an attack, the Brennan Center’s Security Task Force ultimately settled on the number of “informed participants” necessary to successfully execute an attack.[4]  A risk assessment might also consider the number of “insiders” (with access to election hardware and/or software) and “outsiders” that might be needed to carry out an attack.
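
As a purely illustrative sketch of how such a metric might be applied, the Python fragment below tabulates a few hypothetical attack scenarios by the number of insiders and outsiders each would require and ranks them by total informed participants.  The attack names and counts are assumptions made for illustration, not figures from the Security Task Force’s analysis.

    from dataclasses import dataclass

    @dataclass
    class Attack:
        """A hypothetical attack scenario and the people needed to carry it out."""
        name: str
        insiders: int   # informed participants with access to election hardware or software
        outsiders: int  # informed participants without such access

        @property
        def informed_participants(self) -> int:
            return self.insiders + self.outsiders

    # Hypothetical scenarios for illustration only; real figures would come from
    # a detailed analysis like the Security Task Force's.
    attacks = [
        Attack("Corrupt software inserted before machine delivery", insiders=2, outsiders=1),
        Attack("Coordinated absentee-ballot fraud", insiders=0, outsiders=500),
        Attack("Tampering with memory cards at polling places", insiders=30, outsiders=0),
    ]

    # Fewer informed participants means an attack is easier to keep secret,
    # and therefore of greater concern.
    for a in sorted(attacks, key=lambda a: a.informed_participants):
        print(f"{a.informed_participants:4d} participants ({a.insiders} insiders)  {a.name}")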

For risks associated with mistake or accident, there is a significant historical record of flaws in our current voting systems that have the potential to affect the accuracy of our elections.  In a report jointly authored with the Samuelson Law, Technology & Public Policy Clinic at the University of California, Berkeley School of Law (Boalt Hall), the Brennan Center provided examples of dozens of inaccurate electronic vote tallies and machine output errors caused by software bugs, programming mistakes, and other failures on Election Day.[5]  The lesson from this report was clear: regular audits of software-independent, voter-verified records of voters’ selections are critical to detecting inaccuracies and correcting them.[6]

b.      How can the EAC best create a risk assessment that recognizes all possible risks and assesses the plausibility and nature of such risks?

c.       How do you evaluate what is an allowable level of risk?

It is essential that, in using a risk assessment, the EAC recognize that not all risks are equal and that the likelihood of a particular threat must be weighed against its potential impact.[7]  The Brennan Center’s Security Task Force looked at risks likely to have the highest negative impact on the integrity of our electoral process: an undetectable change in the outcome of a statewide election.  The Security Task Force determined that for this high-impact outcome, attacks involving the insertion of corrupt software were least difficult, and therefore of most concern.  There is general agreement in the security community that even low-risk vulnerabilities must be addressed if their potential impact is especially high.[8]
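
To illustrate how likelihood (approximated here by attack difficulty) might be weighed against impact, the brief sketch below sorts a handful of hypothetical threats so that the highest-impact, least-difficult ones come first.  The impact levels and difficulty scores are assumptions for illustration only, not values drawn from any published assessment.

    # Illustrative prioritization: highest impact first; within the same impact
    # level, the easiest (lowest-difficulty) attacks first.  All values are hypothetical.
    threats = [
        {"name": "Undetected software attack changing a statewide outcome", "impact": 3, "difficulty": 1},
        {"name": "Denial of service at scattered polling places",           "impact": 2, "difficulty": 2},
        {"name": "Defacement of an election information website",           "impact": 1, "difficulty": 1},
    ]

    for t in sorted(threats, key=lambda t: (-t["impact"], t["difficulty"])):
        print(f'impact={t["impact"]}  difficulty={t["difficulty"]}  {t["name"]}')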

More generally, it is probably not possible for the EAC to identify all possible risks or fully assess the plausibility and nature of those risks.  A good risk assessment should supplement certain minimum security standards, not serve as a substitute for them.  Most importantly, we know from previous elections that no matter how good our security procedures are, software and hardware breakdowns and human error will always result in some voting system failures on Election Day.  At the very least, voting systems must allow us to recover from breakdowns and to collect information that will allow us both to understand the nature and magnitude of the failures and to prevent their recurrence in the future.

2. How can innovative systems be evaluated for purposes of certification?

a.      How can we create a certification process for innovative systems that isn’t a backdoor around the standard certification process but at the same time isn’t so cost prohibitive and restrictive that it presents a barrier and a disincentive to prospective inventors and manufacturers?

b.      Can a set of limited standards be created in order to make the path towards certification of innovative systems more clear?  If so, how?

The Brennan Center supports the development of innovative systems.  To be optimally effective, the VVSG should clarify what qualifies as an innovation class submission.  A clearer definition could be: a system that performs or supports voting activities and does so in a manner not contemplated by the guidelines.

Innovative systems will undoubtedly present new and challenging problems.  Ideally, the innovation class would adopt decision publication procedures similar to those found in the EAC’s Voting System Testing and Certification Program Manual concerning certification decisions and VVSG interpretations.[9]  The EAC would also be helped by assembling a committee of experts from diverse fields to develop a protocol for testing and certifying such systems.

3. What is the value of the open-ended vulnerability testing model?

a.      Are there any risks associated with this kind of testing?

b.      What are the best ways to limit the cost of this kind of open ended non-scripted testing so that it can be useable within the EAC’s testing program?

Currently, voting systems are certified by laboratories through “conformance” testing, which is meant to ensure that the voting system being tested will respond, under normal conditions, in the way prescribed by the federal voting system guidelines.  Computer scientists and security experts agree that conformance testing is not sufficient to ensure that our systems are secure.  As Professor David Wagner has pointed out in testimony before Congress, security evaluations should assume “an active, intelligent adversary; [conformance testing] concerns the presence of desired behavior, while security concerns the absence of undesired behavior.”[10]

Princeton Professor Ed Felten’s demonstration of a serious security flaw in a certified voting machine is an excellent example of the weakness of relying on conformance testing for security evaluations.  Professor Felten and his co-authors showed that it was possible to insert malicious software onto a voting machine through the machine’s memory card slot.  This flaw could allow a person with just a few seconds of access to the memory card slot to “modify all of the records, audit logs, and counters kept by the voting machine.”[11]  While the flaw may have violated provisions of the voting system guidelines, those provisions were vague enough that it is easy to understand how lax testing could have missed it;[12] nothing in the guidelines specifically prohibited a voting machine from being able to download code from a memory card or through a memory card slot.

It is not reasonable to expect that we can develop a “checklist” that will imagine every possible flaw in a voting system.  Clearly, however, finding such flaws before certifying machines is extremely important.

There are at least two important ways to address concerns around the limits of conformance testing.  First, vendors should be required to demonstrate how their machines will defeat a standard set of threats.  These threats could be developed by NIST and/or taken from the risk assessment developed by the EAC.  Under no circumstances should software be the only defense against such attacks.[13]

Second, we should require Open Ended Vulnerability Testing (“OEVT”) (these tests are often referred to as “red team exercises”).[14]  This is how many of the most serious vulnerabilities in electronic voting systems have been found.[15]  Unfortunately, to this point, such flaws have been found outside the certification process, after machines were already certified and used in elections.

The draft VVSG does a good job of limiting the cost of OEVT, so that it can be usable within the EAC’s testing program, by providing specific requirements, restrictions, and fail criteria.  Of course, it will be necessary to balance the greater system quality that can be achieved through more extensive OEVT against the potential expense and delay associated with such testing.

c.       If the EAC were to require OEVT how could it best be included into the EAC’s Testing and Certification Program?

Vendors should be required to fix any flaws found in voting systems as a result of OEVT, and should be required to notify all jurisdictions using their systems of the flaws discovered.

4. Do methodologies exist to test voting system software so it can be reliably demonstrated to operate correctly?

a.      If testing to a thorough set of standards is not enough to demonstrate the reliability of the system, what else can be done to improve confidence in electronic voting systems?

While our methods for testing voting system software can certainly be improved (and the current standards in the draft VVSG make important strides in this direction), it is not possible to know with certainty that a voting system will operate correctly.  Given the amount of software, the number of points of vulnerability, and the number of persons involved in producing machines and in administering and participating in elections, even the best testing and certification process will leave room for machine failures and vulnerability to attack.

There are several steps that can be taken to improve public confidence in electronic voting systems:

  • Conduct regular post-election audits comparing software-independent, voter-verified records to electronic tallies, to ensure that those tallies are accurate; a minimal illustration of such a comparison appears after this list.  Regular post-election audits would reduce our reliance on testing labs to ferret out security and reliability problems in the software and would assure the public that their votes were recorded and counted accurately.
  • Ensure that voting system testing laboratories are, and appear to be, independent of vendors.  Recent events have left many questioning the independence and competence of the laboratories that test and certify electronic voting systems.  There are at least two things that can be done to begin to change this perception and create truly independent labs.  First, we should end the process whereby the voting system testing laboratories are chosen and directly paid by the vendors whose machines they evaluate.  This creates an appearance of a conflict of interest.  Worse yet, it creates perverse incentives for the testing laboratories when testing vendors’ machines.  Second, the periodic evaluations of testing laboratories conducted by the National Voluntary Laboratory Accreditation Program (“NVLAP”) should be made public promptly, regardless of whether the laboratory’s accreditation is granted, denied or revoked.
  • Incorporate closed feedback loops into the regulatory process, amending voting system standards, where necessary.  The EAC’s Voting System Testing and Certification Program Manual now provides a formal (though severely limited) process by which election officials may report voting system anomalies.  The Brennan Center joins other organizations in recommending that this reporting process be opened to include reporting from voters and technical experts who find anomalies.[16]  Standards should be informed by experience.  As David Wagner has noted in testimony to Congress, “when an airplane crashes, federal crash investigators descend upon the scene to learn what went wrong so we can learn from our failures and ensure it won’t happen again.”[17]  Voting system failures should be treated in the same way.
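
As a minimal sketch of the audit comparison described in the first bullet above, the hypothetical example below samples precincts at random, compares hand counts of the voter-verified records against the electronic tallies, and flags any discrepancy for investigation.  The precinct data and sampling rate are invented for illustration; this is not a recommended audit protocol.

    import random

    # Hypothetical per-precinct results: electronic tallies and hand counts of the
    # software-independent, voter-verified records (e.g., paper ballots).
    electronic = {"P-01": 412, "P-02": 389, "P-03": 501, "P-04": 275}
    hand_count = {"P-01": 412, "P-02": 391, "P-03": 501, "P-04": 275}

    def audit(sample_fraction=0.5, seed=2008):
        """Hand-count a random sample of precincts and flag any mismatch."""
        rng = random.Random(seed)
        precincts = sorted(electronic)
        sample = rng.sample(precincts, max(1, int(len(precincts) * sample_fraction)))
        for p in sample:
            if electronic[p] != hand_count[p]:
                print(f"DISCREPANCY in {p}: machine={electronic[p]}, paper={hand_count[p]}")
            else:
                print(f"{p} matches ({electronic[p]} votes)")

    audit()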

5. Throughout the creation of its draft VVSG, the EAC’s Technical Guidelines Development Committee struggled to balance the need for useable and accessible systems with the desire to create the most secure system possible.

a.      How can the EAC best strike a balance between these sometimes competing needs?

b.      What level of usability or accessibility could be sacrificed in order to gain additional security or vice versa?

Any voting system used in the United States should allow jurisdictions to provide a minimum level of security, reliability, accuracy, usability and accessibility to their voters.  For issues of security, reliability and accuracy, that must mean, at the very least, the ability to identify and correct anomalies without depending on software.

For issues of usability and accessibility, it must mean creating systems that allow election officials to (a) conduct adequate usability and accessibility tests, using members of the local community (including persons with a variety of disabilities) and (b) make adjustments based on the findings of those tests to ensure that, to the greatest extent possible, all voters are able to cast valid ballots that accurately reflect their intended selections without undue delays or burdens, and are able to do so privately and independently.

6. Are there any changes to the VVSG, in either scope or depth, which would significantly reduce the cost (time and/or expense) of compliance without adversely affecting the integrity of the VVSG or the systems that are derived from its implementation?

a.      What needs to be added or removed from this document to strengthen it and the systems to be constructed to its specification?

b.      How could the process of developing and vetting the VVSG be improved to ensure higher volume and higher quality input from election officials?

The greatest concern for election officials about the next iteration of the VVSG should be how its new requirements will affect the operations of their elections.  Better involvement of election officials in reviewing the VVSG would be possible if an organization like NIST or NASED put together an operational checklist that allowed election officials to understand whether and how each technical specification would affect election operations.

The next iteration of the VVSG could be strengthened in several ways:

  • Add requirements for the auditability of voting systems.  As already discussed, post-election audits of software-independent, voter-verified records are critical to ensuring both public confidence in and the integrity of our elections.  Unfortunately, the current draft VVSG does not contain any requirements that would ensure election officials can straightforwardly audit these records.
  • Incorporate incident reporting and feedback loops into the certification process.  As previously discussed in these comments, one of the best ways to improve systems is to ensure that failures are reported and that subsequent action is taken; a simple sketch of such a feedback loop appears after this list.  Incident reporting should be part of a feedback loop built into the certification process.  If voting systems have demonstrated problems in the field that violate requirements in the VVSG, they should not pass federal certification until such problems have been fixed.
  • Require usability and accessibility testing that uses a cross-section of voters.  These tests should take place in environments that closely resemble polling places on Election Day.  They should examine each step a voter must perform, starting with ballot marking and ending with ballot submission; take into account a cross-section of the population, including persons with a full range of disabilities, in order to ensure that accessible features are usable by people with disabilities; and use full ballots that reflect the complexity of a real election.
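
As a hedged illustration of the incident reporting feedback loop described in the second bullet above, the sketch below ties hypothetical field reports to a certification check: a system with unresolved, VVSG-violating field problems is not eligible for certification.  The record fields, system name, and check are assumptions for illustration, not an existing EAC process.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Incident:
        """A hypothetical field report of a voting system anomaly."""
        system: str
        description: str
        violates_vvsg: bool
        resolved: bool = False

    @dataclass
    class IncidentLog:
        reports: List[Incident] = field(default_factory=list)

        def open_vvsg_violations(self, system: str) -> List[Incident]:
            return [r for r in self.reports
                    if r.system == system and r.violates_vvsg and not r.resolved]

    def may_certify(system: str, log: IncidentLog) -> bool:
        """Block certification while unresolved, VVSG-violating field problems remain."""
        return not log.open_vvsg_violations(system)

    log = IncidentLog()
    log.reports.append(Incident("ExampleVote 1.0",
                                "Ballots lost when memory card reached capacity",
                                violates_vvsg=True))
    print(may_certify("ExampleVote 1.0", log))   # False until the problem is resolved
    log.reports[0].resolved = True
    print(may_certify("ExampleVote 1.0", log))   # True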

APPENDIX A: Brennan Center Security Task Force

Chair
Lawrence D. Norden, Brennan Center for Justice

Principal Investigator
Eric L. Lazarus, DecisionSmith

Experts
Georgette Asherman, independent statistical consultant, founder of Direct Effects

Professor Matt Bishop, University of California at Davis

Lillie Coney, Electronic Privacy Information Center

Professor David Dill, Stanford University

Jeremy Epstein, PhD, Cyber Defense Agency LLC

Harri Hursti, independent consultant, former CEO of F-Secure PLC

Dr. David Jefferson, Lawrence Livermore National Laboratory and Chair of the California Secretary of State’s Voting Systems Technology Assessment and Advisory Board

Professor Douglas W. Jones, University of Iowa

John Kelsey, PhD, NIST

Rene Peralta, PhD, NIST

Professor Ronald Rivest, MIT

Howard A. Schmidt, former Chief Security Officer, Microsoft and eBay

Dr. Bruce Schneier, Counterpane Internet Security

Joshua Tauber, PhD, formerly of the Computer Science and Artificial Intelligence Laboratory at MIT

Professor David Wagner, University of California at Berkeley

Professor Dan Wallach, Rice University

Matthew Zimmerman, Electronic Frontier Foundation



[1] Election Data Services, 2006 Voting Equipment Survey, available at http://www.electiondataservices.com/EDSInc_VEStudy2006.pdf.

[2] For a list of the members of the Security Task Force see Appendix A of this Statement.

[3] Lawrence Norden et al., The Machinery of Democracy: Protecting Elections in an Electronic World (Brennan Center for Justice ed., 2006), available at http://www.brennancenter.org/content/resource/machinery_of_democracy_protecting_elections_in_an_electronic_world/.

[4] Id. at 9.

[5] Norden et al., Post-Election Audits: Restoring Trust in Elections app. A (Brennan Center for Justice ed., 2007), available at http://www.brennancenter.org/content/resource/post_election_audits_restoring_trust_in_elections/.

[6] Another essential element of risk assessment is to make certain assumptions about what security procedures vendors and political jurisdictions will put in place.  This is no easy task, as security procedures (in writing as well as in practice) vary widely from vendor to vendor and jurisdiction to jurisdiction.

[7] National Institute of Standards & Technology, Standards for Security Categorization of Federal Information and Information Systems (Feb. 2004), available at http://csrc.nist.gov/publications/fips/fips199/FIPS-PUB-199-final.pdf.

[8] Id.

[9] U.S. Election Assistance Commission, Testing and Certification Program Manual secs. 5.13, 9, available at http://www.eac.gov/voting%20systems/docs/testingandcertmanual.pdf/attachment_download/file.

[10] Voting Machines: Will the New Standards & Guidelines Help Prevent Future Problems?: Joint Hearing Before the H. Comm. on H. Admin. & the Comm. on Science, 109th Cong. 136–148 (2006) (Testimony of David Wagner, Professor of Computer Science, University of California-Berkeley), available at http://www.votetrustusa.org/index.php?option=com_content&task=view&id=1554&Itemid=26.

[11] Ariel J. Feldman, J. Alex Halderman, & Edward W. Felten, Security Analysis of the Diebold AccuVote-TS Voting Machine 2 (Sept. 13, 2006), available at http://itpolicy.princeton.edu/voting/ts-paper.pdf.

[12] In his testimony Professor David Wagner notes that this security vulnerability may have violated Sections 6.4.2 and 6.2 of the FEC Standards.  Certification and Testing of Electronic Voting Systems: Field Hearing in New York, NY Before the Subcomm. on Info. Policy, Census, and Nat’l Archives of the H. Comm. on Oversight and Gov’t Reform, 110th Cong. 12 n.22 (2007) (Written Testimony of David Wagner, Associate Professor of Computer Science, University of California-Berkeley) [hereinafter “Wagner Testimony”].

[13] See National Institute of Standards and Technology, Requiring Software Independence in VVSG 2007: STS Recommendations for the TGDC (draft, Nov. 2006), available at http://vote.nist.gov/DraftWhitePaperOnSIinVVSG2007–20061120.pdf (recommending that future systems be “software independent,” meaning that an “undetected change in software cannot cause an undetectable change or outcome in an election.”).

[14] U.S. Election Assistance Commission Public Meeting and Hearing, Pasadena, CA (July 28, 2005) (Testimony of David L. Dill, Professor of Computer Science, Stanford University and Founder of Verified Voting Foundation and VerifiedVoting.org).

[15] See, e.g., Michael A. Wertheimer, RABA Technologies LLC, Trusted Agent Report: Diebold AccuVote-TS Voting System (Jan. 20, 2004), available at http://www.raba.com/press/TA_Report_AccuVote.pdf; Harri Hursti, Security Alert: July 4, 2005 – Critical Security Issues with Diebold Optical Scan Design (on behalf of Black Box Voting, July 5, 2005), available at http://www.blackboxvoting.org/BBVreport.pdf; Feldman, Halderman, & Felten, supra note 11.

[16] ACCURATE, Public Comment on the Manual for Voting System Testing & Certification Program Submitted to the United States Election Assistance Commission 8 (Oct. 31, 2006), joined by the Brennan Center, available at http://accurate-voting.org/wp-content/uploads/2006/11/ACCURATE_VSTCP_comment.pdf.

[17] Wagner Testimony, supra note 12.