This article first appeared at Just Security.
On May 22, the House Committee on Homeland Security will hold a hearing on how the U.S. Department of Homeland Security (DHS) should use artificial intelligence (AI) to “defend and secure the homeland.” The topic is fitting: DHS has promised aggressive adoption of AI and other emerging technologies. Within the last year, DHS Secretary Alejandro Mayorkas appointed a Chief AI Officer, issued a “roadmap” to pilot the use of AI in investigations, disaster planning, and training immigration officers, and published a departmental policy on AI and another on facial recognition. DHS has launched a hiring surge, bringing on 50 AI experts to form its “DHS AI Corps.” Throughout, Secretary Mayorkas has insisted the Department will respect “individuals’ privacy, civil rights, and civil liberties” and has placed the Officer for Civil Rights and Civil Liberties and the Privacy Officer in key roles on a DHS task force charged with reviewing potential uses (and abuses) of AI.
This flurry of announcements might suggest that algorithmic tools and automation are new to the Department, but it has in fact long relied upon them – and struggled to track and implement them properly. In February, the Government Accountability Office found that DHS’s public accounting of how it uses AI is incomplete and suggested the Department lacks a cohesive method for identifying and communicating which programs use AI.
And the Department has regularly rolled out unproven programs that rely on algorithms and put at risk the rights of tens of millions of Americans. For instance, as our Brennan Center for Justice colleagues have documented, the screening, vetting, and watchlisting regimes that are supposed to keep tabs on potential terrorism appear never to have been tested. Worse, a recent study has shown a strong bias in the government’s terrorism watchlist, which is composed almost entirely of Muslim names. A recent in-depth review of these operations by the Senate’s homeland security committee found that the program also provides de minimis redress. DHS runs sweeping social media monitoring programs that collect information on Americans’ political views and activities; these too have demonstrated no security value. And DHS’s use of facial recognition technology, such as the CBP One application in border security operations, has stifled asylum seekers’ ability to make their claims.
As it expands its use of AI, DHS should leave behind this laissez-faire approach. President Joe Biden’s executive order on AI and recent White House guidance on the guardrails federal agencies should adopt when developing and using AI provide the Department with a toolkit to ensure that its use of AI is fair, effective, and safe. Secretary Mayorkas and the Chief AI Officer should apply the guidance fully to achieve these aims.
Below, we provide policy recommendations for promoting the fairness and efficacy of DHS’s AI systems while enhancing oversight and transparency.
Fully Implement the White House Guidance Across All DHS Programs
To start, DHS should follow the direction in the White House guidance not to pursue AI tools where their risks to rights and safety outweigh their potential benefits. AI should also be off the table if effective mitigation of risks does not exist, or if the efficacy of a tool or the program it augments is unproven—or even disproven. For instance, DHS should not further build out its social media surveillance programs with AI, because the underlying initiatives regularly harm rights while offering little upside. Screening and vetting programs—untested, unproven, and potentially unprovable as executed today—likewise should not receive the AI treatment. DHS needs to get these programs right before further powering them with AI.
As one of us has previously explained in these pages, the White House has created a two-track system for addressing AI risks. The guidance it has issued thus far does not cover broadly defined “national security systems,” which will be governed by a separate and likely weaker memorandum that the White House is developing in secret. Existing guidance also specifically exempts the agencies that are part of the U.S. Intelligence Community from most requirements.
The White House guidance, however, encourages these agencies to implement the baseline standards it has promulgated—and DHS should do so across its operations. Those standards include commonsense measures such as an assessment of a tool’s benefits and risks and whether the latter can be mitigated, testing and independent evaluation, user training requirements, and consideration of equity, fairness, and input from affected community groups.
Secretary Mayorkas should recognize the broad applicability of these safeguards and issue a policy directive requiring the two DHS components that belong to the intelligence community—its controversial Office of Intelligence and Analysis (I&A) and elements of the U.S. Coast Guard—to comply with them. This is particularly important for I&A: Unlike many parts of the intelligence community, the office has a strong domestic bent and broad authorities that are easily abused, as shown by its targeting of protesters and journalists during the 2020 racial justice protests.
Moreover, privacy documentation suggests I&A relies on unproven and opaque systems that would benefit from the type of examination and efficacy testing detailed in the White House guidance. In the same vein, the policy directive should require that DHS AI systems falling under the definition of national security system abide by these standards. This is particularly important for dual-use systems: AI that qualifies as a national security system but is also used for the Department’s various enforcement functions. Those functions impact people’s rights and can put them in criminal jeopardy, regardless of where an AI tool is technically housed.
Minimize Exemptions and Waivers
Deviations from the baseline standards set out in the White House guidance should be extremely rare. This means, for example, that the Chief AI Officer should act with the utmost caution when exercising his authority to override White House designations of certain types of AI as presumptively rights-impacting (e.g., facial recognition) or to decide that a particular AI application does not meet the definitions of “safety-impacting AI” or “rights-impacting AI.”
Similarly, the officer should only rarely exercise his authority to waive safeguards on the grounds that applying them would “increase risks to rights and safety overall” or “would create an unacceptable impediment to critical agency operations.” Waivers should never be granted for new technology: the safeguards exist to ensure that technology is in working order, tested and proven, documented, and rights-protecting—all baseline standards. For AI the Department has already deployed, use of faulty or biased technology should not be allowed to continue indefinitely through waivers, which should be granted only for a limited time and with a concrete plan for bringing the program into compliance.
Maximize AI Safety Resources
The Secretary should also ensure that the Department’s Chief AI Officer is set up for success in managing AI risks. His office must include a significant number of staff experienced in making and implementing policies that implicate decisions about “equity,” risks to safety and rights, and agency compliance with an emerging and likely complex regime. While DHS’s privacy and civil rights offices will provide input on these matters, their input alone is not sufficient. Not only do these small offices already carry significant workloads, but they have also faced severe challenges in fulfilling their functions, as detailed in this Brennan Center report.
The Secretary should ensure that the Chief AI Officer is properly resourced to carry out these critical functions and should arrange to add staff with the knowledge and skills to advise the officer on these aspects of his job. The President’s budget has earmarked $300 million across the federal government “to address major risks and to advance [AI’s] use for public good,” spread out over five years, and $70 million to open the government’s AI offices, providing a potential pool of funding for staffing the Chief AI Officer’s office. But given the complexity of the undertaking, oversight is likely to be greatly underbudgeted, leaving those charged with overseeing AI deployment with far fewer resources than they need.
Nor does it seem that DHS itself has figured out how much it intends to allocate to these critical tasks. DHS’s proposed budget for fiscal year 2025 seeks $5 million to staff the Chief AI Officer’s office, while another section of the proposal suggests that $9.9 million will pay for two full-time employees. Even with personnel or funds assigned from other parts of the Department, this would constitute serious underfunding.
Strengthen Accountability and Transparency
Ensuring accountability for DHS programs remains a longstanding challenge. Internal and external oversight mechanisms are far from robust, and expansive authorities too often allow abuse to go unchecked. To ensure that the Chief AI Officer can fulfill his responsibility to see that AI is well-tested, thoughtfully developed, and protective of rights and privacy, Secretary Mayorkas should separate the roles of Chief AI Officer and Chief Information Officer. Otherwise, the mission-oriented technical execution duties of the latter can too easily overwhelm the protective duties of the former. (Other agencies, such as the Department of Justice and the Department of Health and Human Services, have already separated the two functions.)
As DHS’s Chief AI Officer carries out his mandate to implement the White House guidance, he should hew to the principle of maximum transparency. For example, the officer is charged with deciding which AI tools should be included in the inventory of use cases and which the agency can hide from the public. According to the White House guidance, an agency need not include AI use cases in its inventory if “sharing would be inconsistent with applicable law and governmentwide policy.” Based on the draft version of the guidance and a transparency requirement in another part of the final guidance, “applicable law and governmentwide policy” likely refers to rules that limit disclosure, such as those “concerning the protection of privacy and of sensitive law enforcement, national security, and other protected information.”
The Chief AI Officer should ensure that the inventory includes all AI systems used by DHS. System-level transparency is achievable without revealing sensitive information, as other contexts show: privacy documentation already discloses information about many of these systems, and, elsewhere, local laws require police departments to disclose information about their surveillance technologies. And as one of us has written previously, where information about a system is classified, DHS and the Office of the Director of National Intelligence should undertake a mandatory declassification review and release as much information as possible. If releasable information is fragmentary or unintelligible due, for example, to redactions, DHS should issue a public summary. This process too has proved workable in other circumstances, such as the publication of Foreign Intelligence Surveillance Court opinions.
At the same time, the Chief AI Officer should improve the information reported in the inventory of use cases, which was initially published in compliance with a 2020 executive order on AI. The summaries in the current inventory are threadbare, often lacking even the detail found in other publicly available privacy documentation. As the Chief AI Officer oversees the inventories required by the latest White House guidance, he should greatly expand the summaries, explaining how the tools work, their risks and mitigation steps, the applicable legal authority and policy framework, and whom they target, giving the public enough information to evaluate any claims about these matters. In addition, the officer should provide more specific information about the broad catch-all categories in the inventory. For example, the current inventory says that the Department will use commercial AI for such sweeping purposes as text, image, and code generation. More is needed to explain to the American public what this means in practice.
As it looks to develop new technologies, DHS is at a crossroads: It can choose to carry forward its history of bias and lack of transparency, hiding behind exemptions and waivers. Or the Department can usher in greater openness, proven methods, and affirmative mitigation of detrimental impacts of AI on rights and privacy.