A recent ad touted “a software platform that enables lobbyists to complete the most time consuming, tedious parts of their daily workflow in just a few clicks,” adding:
Imagine using an AI application to read the entire 2,500+ page National Defense Authorization Act, and extract all the components of the bill relevant to each of your clients in mere seconds, with just a few clicks. Or sending a dozen emails to congressional staffers, along with a one-sheeter on your client’s particular issue, without ever touching your keyboard.
Lawmakers and congressional staffers, meanwhile, contend with an ever-growing volume of correspondence, increasingly complex legislative issues, and limited access to the modern tools that lobbyists use. Even the routine task of parsing legislation like the National Defense Authorization Act (NDAA) is a gargantuan undertaking. The asymmetry is striking: lobbyists can use new generative AI tools to digest the NDAA’s substance with “just a few clicks,” while legislators and staffers are still reading and CTRL+F-ing their way through.
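To make the asymmetry concrete, the sketch below shows, in rough outline, how a tool of the kind the ad describes might work: split a long bill into model-sized chunks and ask a generative model which provisions touch a client’s issue. This is purely illustrative; `ask_model`, `chunk_text`, and `extract_relevant_provisions` are hypothetical names, and the stub stands in for whatever commercial AI service such a platform would actually call.

```python
# Illustrative sketch only: scanning a long bill for client-relevant provisions.
# `ask_model` is a hypothetical stand-in for a call to a generative AI service.
from typing import List

def ask_model(prompt: str) -> str:
    """Placeholder for a request to an LLM provider (hypothetical)."""
    raise NotImplementedError("Wire this to an actual AI service.")

def chunk_text(text: str, max_chars: int = 12_000) -> List[str]:
    """Split a long bill into pieces small enough for a model's context window."""
    return [text[i : i + max_chars] for i in range(0, len(text), max_chars)]

def extract_relevant_provisions(bill_text: str, client_issue: str) -> List[str]:
    """Ask the model, chunk by chunk, which provisions relate to the client's issue."""
    findings = []
    for chunk in chunk_text(bill_text):
        prompt = (
            f"List any provisions in the following bill text that relate to "
            f"'{client_issue}'. Reply 'NONE' if nothing applies.\n\n{chunk}"
        )
        answer = ask_model(prompt)
        if answer.strip().upper() != "NONE":
            findings.append(answer)
    return findings
```

Nothing in this loop is exotic, which underscores the point: the same pattern could run just as easily inside a congressional office as inside a lobbying firm.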
Congressional capacity has been strained for years. AI could exacerbate the problem by empowering outside interests while Congress stagnates, or it could help level the playing field. Of course, any organization adopting AI into its operations must beware the documented risks associated with its use, from commercial chatbots’ inaccuracies and “hallucinations” to security and confidentiality concerns to the perpetuation of biases embedded in the data on which these systems are trained. Accordingly, legislatures looking to incorporate AI tools must weigh not just how those tools could increase efficiency but also what guardrails are needed to evaluate performance and mitigate risks.
This essay assesses recent steps by Congress to establish policies governing the use of generative AI and to encourage the legislative branch’s responsible experimentation with these new technologies. It emphasizes the importance of a proactive approach in the context of the “pacing problem” — a term coined by legal scholar Gary Marchant to describe the ever-expanding gap between technological advancement (which is often exponential) and the ability of governing institutions to keep up with these changes (at their default linear pace). It also explores the advantages of using AI in the legislative process, including its potential to strengthen institutional knowledge, policy research, oversight, and public engagement. It then reviews some of the known risks associated with recent innovations in AI technology and presents recommendations that address these risks while capitalizing on the benefits. These recommendations apply to Congress and to other legislative bodies seeking to develop their own AI strategies.