July 01, 2024
OMB Imposes Comprehensive AI Use, Procurement, and Risk Management Requirements


To address AI governance, risk assessment, and procurement across the federal government, the Office of Management and Budget (OMB) set requirements for federal agencies in a March 28 policy memorandum titled “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (M-24-10). The OMB memo expands on directives set by President Biden’s AI Executive Order of October 2023, E.O. 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Section 10 of the E.O. states that the OMB Director “shall issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government” within 150 days. OMB published a draft of its memorandum on November 3 and received 196 comments from companies, associations, interest groups, and individuals. Neither E.O. 14110 nor the OMB memorandum applies to national security systems.
This memo directs agencies to advance AI governance and innovation while managing risks from using AI in the Federal Government, particularly those affecting the rights and safety of the public.
OMB Director Young,
Memo M-24-10
The E.O. and the memo outline a comprehensive, government-wide AI framework that encourages the transformative advances AI may bring while considering ethical issues and social mores and protecting fairness, equity, privacy, civil rights, and safety. The memo focuses on how government agencies are using AI to “inform, influence, decide, or execute agency decisions or actions.” Examples include the Department of Homeland Security using facial recognition technology, the IRS using AI to evaluate tax returns, or the National Oceanic and Atmospheric Administration using AI to analyze urban heat islands.
The memo sets requirements in three areas: Governance, Advancing Innovation, and Managing Risks. Agencies must develop an enterprise strategy to advance AI use and “reduce barriers to the responsible use of AI, including barriers related to IT infrastructure, data, cybersecurity, workforce and…generative AI.” Balancing innovation and opportunity against risk and protection is core to the government’s AI policies, including the memo and the E.O. Congress, like governments worldwide, is grappling with the same balance of opportunity versus risk.
TSA may soon look to AI algorithms, and particularly facial recognition, powered by AI, to identify security threats among the traveling public and enhance the prescreening process…while these AI powered systems offer the promise of increased security and efficiency, they also bring significant risks that Congress must carefully assess. For instance, AI powered facial recognition systems capture and store images of Americans and foreign travelers, which presents substantial privacy concerns.
Chairman Green (R-TN), House Committee on Homeland Security,
May 22
WHAT IS COVERED BY OMB REQUIREMENTS?
The new requirements focus on “the specific risks from relying on AI to inform or carry out agency decisions or actions” and are in addition to any existing risk management, privacy, or information security requirements. The OMB requirements apply to “new and existing AI that is developed, used, or procured on behalf of covered agencies,” but not to national security systems.
GOVERNANCE
Chief AI Officer: Each agency had to appoint a Chief AI Officer (CAIO) and convene an agency AI governing body by May 31. Some agencies promoted from within, others hired for the job, and some “dual-hatted” the CAIO role with other technology leadership positions. In April, the Department of Defense (DOD) appointed Dr. Radha Plumb, a former Deputy Under Secretary of Defense for Acquisition and Sustainment, as CAIO. In February, the VA named CTO Charles Worthington as its CAIO; he now holds both jobs. In December, the Department of Energy announced its CAIO, Dr. Helena Fu, who will head a new organization, the Office of Critical and Emerging Technology.
Compliance Plans: Each agency must report its compliance with OMB’s requirements in September and every two years thereafter. The plan must be posted publicly, state how the agency is meeting OMB’s AI use requirements, and provide the agency’s compliance and reporting guidelines. Alternatively, an agency may attest that it does not use AI covered by the memo.
Use Cases: Annually, each agency, excluding DOD and the Intelligence Community, must inventory its AI use cases, report them to OMB, and post them publicly. Starting in 2024, agencies must also assess whether each use case could impact safety or civil rights (definitions follow). DOD is required to collect this information and report it in aggregate. OMB has released draft guidance for this fall’s agency AI reporting and plans to issue more detailed instructions.
ADVANCING INNOVATION
OMB encourages agencies to use AI to increase mission effectiveness, benefit the public, become more efficient, and improve service overall. Agencies are tasked with strengthening AI talent and improving procurement and AI development.
AI Plan: Agencies have a March 2025 deadline for developing a full agency AI strategy that includes: the most impactful AI uses; a current capabilities assessment; governance plans; enterprise capacity (including data storage, computing capacity, and cybersecurity); plans for AI tools to support R&D; the state of the AI workforce; and prioritized planning for future AI investment in the budget.
BARRIERS TO INNOVATION
IT Infrastructure: Agencies are encouraged to provide the high-speed computing and other IT infrastructure required for AI development and testing.
Data: Data used to train or test AI must be evaluated for quality and the presence of bias, and should be representative of the relevant population. Agencies need to be able to share, curate, and manage access to data from both a technical and a legal standpoint.
Cybersecurity and Generative AI: Agencies should update cybersecurity practices to address AI applications, and assess the potential use of generative AI in their mission area, while setting up safeguards for its use.
Collaboration: Agencies “must proactively share their custom-developed code…for AI applications in active use and must release and maintain that code as open source software on a public repository” with limited exceptions. Data sharing and release should be a priority when procuring custom-developed AI code or data enrichments such as labeling.
MANAGING RISKS
OMB requires all agencies (except the Intelligence Community) to implement the following “minimum practices” to manage risks from safety-impacting and rights-impacting AI. Agencies must report on their compliance and terminate any non-compliant AI uses by December 1. Lists of AI applications presumed to be safety-impacting or rights-impacting, along with full definitions of “safety-impacting AI” and “rights-impacting AI,” appear in the memo’s Appendix I.
Definition excerpts:
- Safety-impacting AI is “AI whose output produces an action or serves as principal basis for a decision that has the potential to significantly impact the safety of” human life or well-being, climate or environment, critical infrastructure, or strategic assets or resources.
- Rights-impacting AI is “AI whose output serves as principal basis for a decision concerning a specific individual or entity that has…a significant effect on that individual or entity’s civil rights, civil liberties, or privacy;” equal opportunities; or access to government resources.
Minimum Practices: Before using new or existing safety-impacting or rights-impacting AI, agencies must complete an impact assessment that evaluates the intended purpose, the potential risks, and the characteristics of the AI’s training data; test the AI application in a real-world environment; independently evaluate the application and provide input to the authority-to-operate determination; conduct regular evaluations and mitigate risks; and provide adequate training and fail-safe procedures.
In addition, for rights-impacting AI, OMB imposes a second set of minimum practices, including assessing the AI’s impact on equity and fairness, seeking feedback from affected communities, and maintaining processes for remedies, appeals, and opt-outs where appropriate.
Procurement of AI: The memo provides recommendations regarding procurement; OMB will later develop a separate instruction to ensure that federal procurement of AI complies with the E.O. and the guidance in the memo.
Recommendations include requiring transparency and information from vendors, such as “adequate information about the provenance of the data” used to train the AI; evaluating vendor claims about effectiveness and risk management; testing AI in the particular use environment; and considering contract provisions that incentivize improvement. OMB stresses that “Any data used to help develop, test, or maintain AI applications, regardless of source, should be assessed for quality, representativeness, and bias.” Further, OMB wants agencies to prioritize interoperability and conduct full and open competitions. The Federal Government should retain rights to its data and ensure that data is protected from disclosure, protect biometric data fully, and set additional safeguards for generative AI. Finally, OMB encourages agencies to assess products and services for environmental impact and energy use.
WHAT TO WATCH FOR
This summer, agency CAIOs will have their first chance to influence the federal budget as the administration ramps up the FY26 budget development process; November’s election results may change those priorities. In September, agency AI compliance reports and inventories of AI use cases are due. Expect OMB to issue detailed guidance for agencies on AI procurement requirements soon. Comprehensive agency AI strategies are due in March 2025.
Congress is addressing AI issues with hearings, legislation, and bipartisan approaches. On June 14, Reps. Garbarino (R-NY) and Connolly (D-VA) introduced the AI Leadership to Enable Accountable Deployment Act (AI LEAD Act), H.R. 8756, a companion to a 2023 Senate bill, S. 2293, to codify the new CAIO role and its responsibilities. On June 11, Senators Peters (D-MI) and Tillis (R-NC) introduced S. 4495 to address federal government procurement, development, and use of AI. Sen. Peters noted the need for pre-purchase risk assessments and emphasized pushing for “more flexible, competitive purchasing practices” via pilot procurement programs.
The Biden Administration is moving ahead with or without Congress, but AI could provide opportunities for bipartisan approaches that support innovation, government efficiency, fairness and equity, and U.S. competitiveness. Federal AI investments and policy guidelines have the potential to drive both federal AI contracting and changes in the commercial sector approach to AI.