The use of AI for counterterrorism purposes raises immense risks to the rights and safety of people in the United States and abroad, and it urgently requires scrutiny and accountability. US national security agencies and the military are integrating AI into some of the government’s most consequential decisions: whom it subjects to intrusive surveillance and searches, whom it labels a “risk” or “threat” to national security, and even whom it targets with lethal force.
Even without AI augmentation, many of the government’s counterterrorism programs have not been meaningfully tested for efficacy and are characterized by vague and overbroad standards, weak safeguards, and little to no transparency. Yet rather than fixing these fundamental flaws before expanding these activities, the government appears to be moving ahead with real-world deployment of AI in many areas.
The Brennan Center and the ACLU submitted comments recommending that PCLOB use its investigative powers to document exactly how AI has been integrated into the government’s counterterrorism programs. Both organizations also urged PCLOB to propose that the government cease AI use when: (1) the AI is not sufficiently tested; (2) it is unreliable or otherwise ineffective; or (3) it raises risks to privacy, civil liberties, civil rights, or safety that cannot be effectively mitigated. In addition, PCLOB should recommend increased transparency across all AI systems that come within its mandate, comprehensive testing of efficacy and risks, and increased resources and support for independent AI oversight.