Statement

Comment Submitted to the Privacy and Civil Liberties Oversight Board Public Forum on Artificial Intelligence in Counterterrorism and Related National Security Programs

On July 1, 2024, the Brennan Center and the ACLU submitted comments to the Privacy and Civil Liberties Oversight Board (PCLOB), urging it to undertake a deep-dive examination of the use of AI in counterterrorism and related national security programs.

Published: July 2, 2024

The use of AI for counterterrorism purposes poses immense risks to the rights and safety of people in the United States and abroad, and it urgently requires scrutiny and accountability. US national security agencies and the military are integrating AI into some of the government’s most consequential decisions: whom it subjects to intrusive surveillance and searches, whom it labels a “risk” or “threat” to national security, and even whom it targets with lethal force.

Even without AI augmentation, many of the government’s counterterrorism programs have not been meaningfully tested for efficacy and are characterized by vague and overbroad standards, weak safeguards, and little to no transparency. Yet rather than fixing these fundamental flaws before expanding such programs, the government appears to be pressing ahead: in many areas, real-world deployment of AI is already underway.

The Brennan Center and the ACLU submitted comments recommending that PCLOB use its investigative powers to document exactly how AI has been integrated into the government’s counterterrorism programs. The organizations also urged PCLOB to propose that the government cease using AI when: (1) the AI has not been sufficiently tested; (2) it is unreliable or otherwise ineffective; or (3) it poses risks to privacy, civil liberties, civil rights, or safety that cannot be effectively mitigated. In addition, they urged PCLOB to recommend increased transparency across all AI systems within its mandate, comprehensive testing of efficacy and risks, and increased resources and support for independent AI oversight.