Can government govern artificial intelligence (AI)? A central question of AI governance concerns state capacity: whether government has the ability to accomplish its policy goals. We study this question by assessing how well the U.S. federal government has implemented three binding laws on AI governance: two executive orders (E.O. 13,960, on trustworthy AI in the public sector, and E.O. 13,859, on AI leadership) and the AI in Government Act. We conduct the first systematic empirical assessment of the implementation status of these three laws, each of which has been described as central to U.S. AI innovation. First, through extensive research, we track line-level adoption of each mandated action. Based on publicly available information, we find that fewer than 40 percent of the 45 legal requirements could be verified as implemented. Second, we examine the implementation of transparency requirements at up to 220 federal agencies. We find that nearly half of agencies failed to publicly issue AI use case inventories, even when those agencies have demonstrable machine learning use cases. Even among agencies that complied with these requirements, efforts are inconsistent. Our findings highlight the weakness of U.S. state capacity to carry out AI governance mandates, and we discuss implications for addressing bureaucratic capacity challenges.