This article first appeared at Just Security.
As the federal government pours billions of dollars into artificial intelligence (AI), it is also stepping up efforts to place meaningful restrictions on the development and use of the technology. In 2023, the government had ten regulations in place restricting how it uses AI – up from four in the previous year. One of the most closely watched regulations comes from the White House’s Office of Management and Budget (OMB), which released guidance in March prescribing minimum risk management practices for federal agencies’ use of rights- and safety-impacting AI. The long-awaited guidance is the most comprehensive attempt yet to distill calls for safe and responsible AI into an actionable framework for federal agencies.
The guidance expects agencies to proactively address systemic risks, biases, and harms in their use of AI – a fundamental shift in how many of them have approached their obligations to protect civil liberties and rights, which all too frequently take a backseat to operational concerns. Resourcing agencies not only to establish technical safeguards but also to take a broader view of AI’s impact on society is a crucial first step in making this change. Just as important, however, is effective oversight – putting in place a system of internal and external checks that incentivizes compliance and holds agencies accountable to the public.
The executive branch cannot do this work alone – lasting investments in compliance, expertise, and oversight also require comprehensive legislation. We explain below how the White House should lean on its budgetary and management powers to curb risky AI uses and acquisitions, and what agency leaders must do to shore up internal oversight. We also identify the resource and oversight gaps that are likely to persist despite these efforts, and the steps Congress should take to close them.
Resourcing AI Guardrails
Much scrutiny of how agencies implement the OMB guidance is likely to fall on agencies’ Chief Artificial Intelligence Officers (CAIOs) – agency-designated leads responsible for promoting AI innovation (i.e., identifying AI opportunities that will advance the agency’s mission and removing barriers to uptake of the technology) while mitigating its risks (i.e., overseeing AI impact assessments, maintaining the agency’s AI use case inventories, and ensuring compliance with other risk management practices).
These responsibilities are an enormous undertaking – and largely uncharted territory for agencies. Meaningful impact assessments – the centerpiece of OMB’s pre-deployment safeguards – examine not only the technical capabilities and limitations of AI systems, but also how they will practically affect people and society. Interrogating how facial recognition systems can exclude people from public services or amplify discriminatory surveillance, for example, goes beyond measuring whether they are more or less accurate for protected classes. Agency staff also must test how the technology performs in real-world conditions, such as in poor lighting or on blurry images; how overstretched processes for appealing agency decisions and other resource constraints might compound AI-related errors; and whether agency resources are better invested in risky and unproven systems or in repairing existing dysfunctions in benefits programs and the criminal legal system.
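To ground the narrower technical piece of that work, here is a minimal sketch – in Python, with hypothetical column names and a synthetic evaluation set – of how an agency team might compare a face matching system’s error rates across demographic groups and image-quality conditions. It is an illustration of subgroup accuracy testing under stated assumptions, not a prescribed methodology.

```python
# Minimal sketch of subgroup accuracy testing on a labeled evaluation set.
# Column names ("group", "condition", "true_match", "predicted_match") are
# hypothetical; real testing would use operational data and larger samples.
import pandas as pd


def disparity_report(results: pd.DataFrame) -> pd.DataFrame:
    """Accuracy and false non-match rate per demographic group and image condition."""

    def summarize(df: pd.DataFrame) -> pd.Series:
        accuracy = (df["predicted_match"] == df["true_match"]).mean()
        genuine = df[df["true_match"]]  # pairs that truly match
        # False non-match rate: genuine pairs the system failed to match.
        fnmr = (~genuine["predicted_match"]).mean() if len(genuine) else float("nan")
        return pd.Series({"accuracy": accuracy, "fnmr": fnmr, "n": len(df)})

    cols = ["true_match", "predicted_match"]
    return results.groupby(["group", "condition"])[cols].apply(summarize)


if __name__ == "__main__":
    # Synthetic illustration only.
    data = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "condition": ["clear", "blurry", "clear", "clear", "blurry", "blurry"],
        "true_match": [True, True, False, True, True, False],
        "predicted_match": [True, False, False, True, False, False],
    })
    print(disparity_report(data))
```

Disparities surfaced this way would still need to be read alongside the operational questions above – who is excluded, what recourse they have, and whether the system should be deployed at all.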
Staffing agencies to conduct these assessments requires bringing interdisciplinary experts into the fold: both technical AI talent and specialists in AI ethics and the broader social implications of the technology. Internal buy-in is also vital, not just from agency leaders making key decisions on AI priorities, but also from frontline workers who would be most attuned to how these systems will work in practice. For example, benefits adjudicators are best placed to explain how integrating AI into the adjudication process might bias their decision-making, and whether safeguards to limit harmful bias and preserve human oversight are in fact working as intended.
Conducting this outreach, developing user-friendly assessment templates, training staff to critically examine AI risks, and setting up processes to monitor both the performance of risk mitigation safeguards and the emergence of unanticipated risks – getting all of this right requires significant time, money, and effort.
Resourcing compliance with OMB’s guidance, however, is fraught with challenges. The President’s budget for fiscal year 2025 asks Congress to allocate $300 million to address “major” AI risks – but the fine print indicates that this will be spread out over a five-year period. An additional $70 million has been requested to specifically support CAIOs, but this may be sufficient only for threadbare coordination and advisory functions. The Department of Homeland Security (DHS), for example, has requested $9.9 million to open its CAIO office and fund its activities – a rounding error in its $107 billion budget request. Brennan Center research has found that the department’s key oversight offices lack the staffing capacity to effectively monitor its sprawling operations and serve as a check on abusive and unconstitutional practices – and even they are funded at much higher levels than the CAIO. (The department has requested $18 million for its Privacy Office, $48 million for its Office for Civil Rights and Civil Liberties, and $233 million for its Inspector General.)
Unless Congress directs further spending on risk management, the share of the AI budget that supports this function will turn largely on how agencies exercise spending discretion, and on OMB’s oversight of such spending. Subsequent budget cycles will also test the durability of the government’s commitment to AI guardrails. OMB works with agencies to draft budget requests that reflect presidential priorities – its willingness to condition agencies’ AI asks on the strength of their risk mitigation and monitoring protocols will depend on how highly AI harm prevention ranks on future presidential agendas.
This is likely to set up a highly politicized, behind-the-scenes tussle of values: will agencies bow to AI hype, accelerating adoption with perfunctory safeguards, or chart a more cautious path? CAIOs should – with OMB’s backing – press agency leadership to draw up budgets that prioritize comprehensive impact assessments and risk monitoring. But the responsibility for insulating AI guardrails from the vagaries of executive politics ultimately lies with Congress. A legislative framework for AI risk management is much harder to undo or defund than an executive order.
Overseeing Guardrails
Resourcing compliance is just part of the equation. Robust oversight is especially needed since OMB has given agencies broad discretion over which systems are subject to its risk management practices. Agency CAIOs may waive compliance with these practices if applying them would heighten risks to rights and safety or become an “unacceptable impediment to critical agency operations.” CAIOs are also responsible for assessing which systems the practices cover: if they find that a system is not “a principal basis” for an agency decision or action, risk management is not required. Agencies exercising national security functions have even more discretion: AI deemed to be a “component of a National Security System” is exempt from OMB’s guidance altogether and governed by a separate (and likely weaker) memorandum the White House is developing in secret.
We have already seen how these types of open-ended exemptions to regulatory guardrails undermine their effectiveness. Last year, New York City began enforcing a law requiring employers to audit their use of automated hiring software for bias, amid growing evidence that such tools are cutting people off from jobs because of their age, race, gender, and disability. But a Cornell University study of 391 employers found that fewer than five percent had posted audits. The researchers attributed this to a provision in the law that lets employers decide whether their hiring tools “substantially assist or replace” human decision-making – a discretionary assessment that parallels OMB’s.
Longstanding weaknesses in internal oversight exacerbate the risk that agencies will bypass OMB’s guardrails or reduce them to a box-ticking exercise. DHS’s Office for Civil Rights and Civil Liberties, for example, has not published an impact assessment of the department’s operational activities in over a decade. One of the few assessments it did publish rubber-stamped electronic searches at the border without even a reasonable suspicion of wrongdoing, despite their significant chilling effects on First Amendment activity. The department’s Privacy Office is similarly troubled, struggling to enforce privacy training and oversee the agency’s purchases of personal data from data brokers. These failures – stemming from a lack of leadership support, weak investigative powers, and limited access to operational decision-making – raise concerns that some agencies are sidelining internal watchdogs while claiming the mantle of oversight.
OMB’s guidance requires CAIOs to report directly to top agency leaders and emphasizes that waivers and opt-outs are decisions solely for them to make – a possible attempt to elevate consideration of AI risks within agencies. But concentrating implementation and oversight within a single office can also turn it into a single point of failure. CAIOs are not immune from the pressures of agency politics – particularly when the need for caution and safeguards is perceived as clashing with critical agency imperatives. The reality that many CAIOs appointed so far are serving another operational function could also create conflicts of interest – such as between a Chief Information Officer’s understandable desire to update existing IT infrastructure with the latest AI capabilities and the CAIO’s responsibility to question their necessity.
To mitigate these pressures, agencies investing significantly in AI should establish CAIOs as standalone roles, as the Department of Health and Human Services and the Department of Justice have done. Agency leadership should also fast-track the hiring of technical experts not just to bolster innovation with AI, but also to support CAIOs in developing bias and usability testing protocols and other technical means of complying with OMB’s guidance. In large agencies with multiple components, agency leaders should embed CAIO staff in component offices leading AI initiatives, enabling close collaboration on impact assessments and other risk management practices. This will also help CAIOs obtain real-time updates on AI-related risks and harms as they emerge, and mitigate them promptly.
Ensuring that CAIOs are truly independent, however, will require legislative intervention. Codifying CAIO offices would provide a buffer against inevitable changes in agency leadership and presidential priorities. But statutory authorization alone is not a panacea; it should also serve as a vehicle for expanding CAIOs’ oversight powers and responsibilities. Congress should, for example, empower them to investigate unconstitutional or otherwise harmful practices enabled by agency uses of AI; require regular reporting on their progress implementing AI guardrails and on major rights- and safety-related incidents; and mandate disclosure of key risk management documentation such as impact assessments.
In the meantime, OMB can do more to shore up weaknesses in the current setup. It should conduct annual reviews of how each agency is implementing its guidance and develop evaluation benchmarks where possible – perhaps through publicly available scorecards that capture the quality of AI use case inventory reporting, the frequency of rights- and safety-related incidents, and how quickly these are detected and addressed. It should also scrutinize the use of waivers and other opt-out provisions, modifying the current guidance to limit their scope if there is evidence of abuse. Finally, it should leverage its influence over agency budget requests to ensure meaningful compliance with its guidance – for example, by probing whether agencies are planning to spend on risky and unproven technologies, or by persuading them to dial back investments in AI until they have a robust risk management framework in place.
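As a rough illustration of what such a scorecard could track, the sketch below – written in Python, with hypothetical field names and metrics that are not drawn from OMB’s guidance – summarizes one agency’s inventory completeness and how quickly rights- and safety-related incidents are detected and resolved.

```python
# Illustrative scorecard metrics for a single agency. Field names and
# scoring choices are hypothetical, not taken from OMB's guidance.
from dataclasses import dataclass
from datetime import date
from statistics import mean


@dataclass
class Incident:
    occurred: date   # when the rights- or safety-related harm began
    detected: date   # when the agency identified it
    resolved: date   # when mitigation was completed


def scorecard(agency: str, inventory_fields_complete: float,
              incidents: list[Incident]) -> dict:
    """Summarize inventory quality and incident responsiveness for one agency."""
    return {
        "agency": agency,
        "inventory_completeness": round(inventory_fields_complete, 2),
        "incident_count": len(incidents),
        "avg_days_to_detect": mean((i.detected - i.occurred).days for i in incidents) if incidents else None,
        "avg_days_to_resolve": mean((i.resolved - i.detected).days for i in incidents) if incidents else None,
    }


if __name__ == "__main__":
    sample = [Incident(date(2024, 3, 1), date(2024, 3, 15), date(2024, 4, 2))]
    print(scorecard("Example Agency", inventory_fields_complete=0.8, incidents=sample))
```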
In theory, Offices of Inspector General (OIGs) – internal watchdogs installed by Congress at certain agencies to prevent and investigate fraud, waste, and abuse – can also play a useful role. Their deep familiarity with agency operations and programs provides much-needed context for understanding how introducing AI can mitigate or worsen existing risks. But OIGs are prone to poor management, chronic staff shortages, and a tendency to limit their mandate to internal and technical matters, inspiring little confidence that they can meaningfully oversee how agencies are using AI. At least two OIGs – at the National Security Agency and the Social Security Administration – have also retaliated against whistleblowers.
Congress is responsible for rectifying this dysfunction and ensuring that OIGs themselves are subject to proper oversight. OIGs must also be adequately staffed and resourced to police the implementation of AI guardrails. In 2022, Congress asked the DHS OIG to identify the training and investments its staff requires to improve their understanding of AI governance, but it is unclear whether those needs have been met.
The challenges of regulating rapidly developing technology with unpredictable effects have also prompted calls for Congress to establish new oversight bodies. Our Brennan Center and ACLU colleagues have argued for an independent authority dedicated to reviewing national security applications of AI, modeled after the Privacy and Civil Liberties Oversight Board, which was created to monitor post-9/11 counterterrorism programs. Proposals for a Food and Drug Administration-style authority to test and approve commercial AI models could also mitigate some of the risks of government use of these models, while alleviating the burden on individual agencies to conduct testing from scratch.
Congress Must Act
So far, however, Congress has tinkered at the edges of meaningful AI regulation – requiring agencies to compile use case inventories that are light on the details that matter; exhorting them to uphold privacy and rights without specifying red lines or safeguards to protect these values; and commissioning guidance and reporting on risks and best practices without any promise of enforcement or follow-through. Bills that would establish more concrete safeguards – such as the Eliminating Bias in Algorithmic Systems Act of 2023 and the Federal AI Governance and Transparency Act – face an uncertain fate.
Short of enacting comprehensive AI regulation, Congress should press for greater transparency and accountability. The annual appropriations process, for example, can enable greater scrutiny of agencies’ AI spending by requiring them to produce and disclose detailed reports on how they are using and buying AI technologies, in line with OMB’s guidance. Congressional committees can also bring public pressure to bear on controversial and harmful government uses of AI by holding hearings and requesting investigations by the Government Accountability Office.
These stopgap measures, however, are no substitute for a federal legislative framework. The OMB guidance is both a blueprint and a cautionary tale: Congress should enact its risk management practices, but it should also place meaningful restrictions on the use of waivers and tighten the porous definition of rights- and safety-impacting AI. Funding compliance, strengthening internal oversight offices, and creating new AI watchdogs would also help ensure that agencies have the necessary resources to implement essential safeguards and are held accountable for lapses. AI safety is a whole-of-government effort – the stakes are simply too high to let agencies regulate themselves.