The Brennan Center won a significant victory in the fight for government transparency and surveillance oversight last month, when the New York State Supreme Court (the state’s trial court) ordered the NYPD to produce records about testing, development, and use of predictive analytics tools in response to the Brennan Center’s ongoing Article 78 litigation.
The litigation stemmed from a Freedom of Information Law (FOIL) request submitted by the Brennan Center in June 2016 for records about predictive policing. Predictive policing uses computer modeling to try to anticipate crime and to manage when and where police officers are deployed. Algorithms analyze historical data to predict where certain crimes may occur and sometimes even who may be involved in a future crime.
Predictive policing software has been enthusiastically adopted by law enforcement agencies in recent years, championed by proponents as a tool to cut costs, improve police effectiveness, and even potentially improve fairness in policing. Yet most of these tools rely on historical policing data to generate their predictions; in the absence of meaningful oversight and transparency, the software may simply recreate racially biased policing while obscuring its origins. Unfortunately, the nature of the tools often frustrates efforts to hold police accountable. Most of the software uses a black-box algorithm, often purchased from a third-party commercial vendor, to generate predictions from historical data. The involvement of private companies can further complicate transparency, raising questions about trade secrets and intellectual property.
The Brennan Center filed the original FOIL request in the interest of better understanding and informing the public about the use of these systems. The request sought documents including:
- communications with private developers of predictive policing software;
- historical inputs and outputs of the software;
- records on testing and utilization of the software;
- audit logs; and
- policies and procedures governing the use of predictive policing.
If obtained, these records would help the public evaluate the costs and benefits of predictive policing systems and assess whether the system was subject to sufficient oversight and accountability. Yet the NYPD issued blanket denials of both the initial request and a subsequent appeal, forcing the Brennan Center to file suit in December 2016. After months of good-faith negotiations, in which the Brennan Center narrowed its request to exclude the algorithm itself as well as the most recent six months' worth of inputs and outputs, the NYPD produced some documents with significant redactions but continued to withhold most of the records the Brennan Center requested. On August 30, 2017, the court heard oral arguments.
In briefs and at oral argument, the NYPD fought to keep these outstanding documents secret. Its attorney claimed that releasing vendor correspondence would reveal trade secret information and jeopardize future communications with potential vendors, arguing that the NYPD had agreed to keep product testing and performance information strictly confidential. (In fact, the non-disclosure agreements between the NYPD and the third-party vendors in this case covered only NYPD information; there was no countervailing agreement requiring the NYPD to maintain the confidentiality of the vendors.) Additionally, the NYPD claimed that disclosure of historical output data and notes about the types and sources of data used could enable individuals to recreate the department's predictions and somehow game the system.
The Court was unconvinced. Finding that the NYPD had failed to offer any expert testimony to support its claims about trade secrets and security, the Court ordered the NYPD to produce email correspondence with predictive policing software vendors with only limited identifying information redacted; historical output data from the existing predictive policing system up through June 27, 2017; and notes from the NYPD’s Assistant Commissioner of Data Analytics who developed the predictive policing algorithms currently in use. The NYPD was also ordered to produce the summary of results of trials that the department conducted of several vendors’ products; the judge will review those records to determine whether any part of them is covered by a deliberative process privilege, and then presumably order disclosure of the remainder. These materials should offer additional insight into both the existing system and the specifications that the NYPD discussed with vendors previously under consideration.
In addition, the Court ordered the NYPD to expand its search for responsive documents to include the Counterterrorism Bureau, and to submit an affidavit about the results no later than the middle of March. The NYPD had searched only three departments for responsive documents and issued a conclusory statement that the search was complete, claiming that those departments were the only ones that could reasonably be expected to have relevant information; the Counterterrorism Bureau was never searched. Yet the NYPD has said publicly that the predictive policing system is housed within the Domain Awareness System, which is itself located in the Counterterrorism Bureau.
Finally, in response to the Brennan Center's request for policies governing the predictive policing system, the NYPD produced a single privacy policy document from 2009. This 8-year-old policy, the Public Security Privacy Guidelines, governs the use of the Domain Awareness System, which encompasses "technology deployed in public spaces as part of the counterterrorism program of the NYPD's Counterterrorism Bureau." By its terms, the policy has little application to predictive policing, which is not a "technology deployed in public spaces" and purports to be a tool for strategic decision making in policing rather than a mechanism for counterterrorism. This indicates that the NYPD has no policy in place that explicitly governs the use of predictive policing or the sharing and retention of the data it produces.
The Guidelines also require the NYPD to perform audits of the Domain Awareness System; if the policy applies to the predictive policing system, the Department would be required to audit that system as well. The NYPD produced no audit documents, however, despite confirming that it had conducted a diligent search for responsive records, which suggests either that it is not complying with its own policy or that it is continuing to withhold relevant materials. At the same time, the NYPD produced a paper co-written by Evan Levine, the NYPD's Assistant Commissioner of Data Analytics, describing the predictive policing system and the Domain Awareness System and citing statistics about the effectiveness of the department's predictive policing algorithm. The NYPD must have run tests on its programs to generate those statistics, but no records of those tests were ever released to the Brennan Center.
In the course of the August 2017 hearing, the NYPD's attorney suggested that the police department regularly ignores FOIL requests until the requester gives up or files suit. While the Brennan Center is pleased that the Court ultimately granted the majority of the original request, the general lack of transparency around law enforcement practices, particularly with respect to the use of predictive policing software, is deeply concerning. Citizens have the right to know about the tools, costs, and standard practices of the law enforcement agencies that police their communities. In this case, it took a year and a half and a lawsuit to obtain even a portion of the information that should have been made available to the public at the outset. As Judge Barbara Jaffe wrote in her opinion on the Brennan Center's FOIL litigation, "governmental transparency is a transcendent virtue." The Brennan Center is committed to continuing its fight for government transparency, and looks forward to providing analysis of the documents produced as a result of this Order.