Trump Administration Revamps Guidance on Federal Use and Procurement of AI
On April 3, the Office of Management and Budget (OMB) released two much-anticipated memos that will impact the use and procurement of artificial intelligence (AI) by the federal government, signaling an appetite to move quickly to set AI policy. The two memos—M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (M-25-21) and M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government (M-25-22)—were issued in accordance with President Trump’s January 23 Executive Order (“AI EO”) and rescind and replace two Biden Administration memos on the same topics.
As the first public deliverables from the AI EO, the OMB directives offer the first details and insights into this Administration’s approach to federal agency use and procurement of AI. Overall, the memos provide guidance aimed at advancing U.S. leadership in AI, driving AI innovation, and encouraging agencies to use AI to increase the quality and efficiency of government services, while at the same time “ensuring appropriate safeguards are in place to protect privacy, civil rights, and civil liberties, and to mitigate any unlawful discrimination.”
In particular, for what it deems to be “high-impact AI,” M-25-21 establishes minimum risk management practices, which include pre-deployment testing, AI impact assessments, and ongoing monitoring, among other things. While the memos outline a new and distinct approach, their risk-based framework builds on themes from both the first Trump Administration and the Biden Administration regarding promoting responsible AI use within the federal government.
The memos will impact government contractors in a number of ways: they impose new governance approaches and risk management requirements within federal agencies for contractors to navigate, but they also encourage the use and procurement of new AI products and services and set in motion initiatives to harness AI at individual agencies.
Below, we provide key takeaways and high-level summaries of the OMB memos.
The Memos Focus Broadly on “AI Systems,” with Heightened Requirements for “High-Impact AI.”
Much of the high-level framework of the new Trump Administration OMB memos focuses on AI systems that present some level of higher “risk,” which are subject to certain risk mitigation practices. While the previous Biden Administration memos focused on “rights-impacting” and “safety-impacting” AI, the latest Trump Administration memos focus on “high-impact” AI. In relevant part:
- “AI System” is defined to mean any “data system, software, application, tool, or utility that operates in whole or in part using dynamic or static machine learning algorithms or other forms of artificial intelligence,” but the term specifically excludes “common commercial product[s] within which artificial intelligence is embedded, such as a word processor or map navigation system.”
- “High-impact AI” is defined to mean “AI with an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on: 1. an individual or entity’s civil rights, civil liberties, or privacy; or 2. an individual or entity’s access to education, housing, insurance, credit, employment, and other programs; 3. an individual or entity’s access to critical government resources or services; 4. human health and safety; 5. critical infrastructure or public safety; or 6. strategic assets or resources, including high-value property and information marked as sensitive or classified by the Federal Government.” As was the case in the Biden Administration’s approach, the new Trump Administration approach outlines purposes that are presumptively high-impact while giving agencies discretion to make the final determination.
M-25-21 Outlines Requirements for Federal Use of AI.
M-25-21 sets out guidance for federal agency use of AI and rescinds and replaces the Biden Administration’s M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This memo is scoped to apply only to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies,” and explicitly carves out the use of AI in a National Security System. M-25-21 at 4-5.
This memo establishes three core directives to the heads of all Executive branch departments and agencies, including independent regulatory agencies:
- Remove barriers to innovation and provide the best value for the taxpayer. The memo declares that the benefits of AI have already been demonstrated, so it is now time for agencies “to identify and remove barriers to further responsible AI adoption and application, where practicable. . . .” Id. at 5. Specifically, the memo (a) requires that covered agencies develop agency AI strategies within 180 days; (b) encourages covered agencies to “coordinate internally and across the Federal Government on criteria for data interoperability and standardization of data formats as a means of increased AI adoption” and “identify and share commonly used packages or functions that have the greatest potential for reuse by other agencies or by the public;” (c) reminds covered agencies that “it is the policy of the United States to buy American and to maximize the use of AI products and services that are developed and produced in the United States”; (d) provides recommendations for the responsible procurement of high-impact AI capabilities (in addition to M-25-22, which was released in parallel); and (e) encourages covered agencies to prioritize recruiting, developing, and retaining technical talent in AI roles, including by leveraging trainings and resources to upskill existing staff, promoting AI talent, and ensuring accountability. Id. at 5-9.
- Empower AI leaders to accelerate responsible AI adoption. This directive focuses on AI governance within federal agencies and sharing of best practices across the Federal Government. Specifically, the memo: (a) directs agency heads to retain or designate a Chief AI Officer (CAIO) within 60 days and to convene an agency AI Governance Board, which will be the “relevant agency officials to coordinate and govern issues related to the use of AI within the Executive Branch,” within 90 days; (b) requires agencies to take a series of governance-related steps, including developing compliance plans, updating agency policies, developing a generative AI policy, and updating AI use case inventories; and (c) establishes the Chief AI Officer Council, which will be convened by the OMB within 90 days and will serve as the primary interagency body to lead coordination across the Federal Government. Id. at 10-13.
- Ensure their use of AI works for the American people. Finally, the memo holds that agencies must ensure their AI use is “trustworthy, secure, and accountable.” Toward this goal, covered agencies must develop AI risk management policies, implement minimum risk management practices for high-impact AI, and “safely discontinue [the] use of [non-compliant] AI functionality.” Id. at 13-14. Unless the agency CAIO issues a waiver of one or more of the requirements (which must be publicly released, as well as centrally tracked and reported to OMB), agencies must document implementation of these practices for high-impact uses of AI within 365 days, and they must be prepared to report them to OMB as part of a periodic accountability review, an annual inventory, or upon request. Id. at 14. The minimum risk management practices are listed as: (1) conduct pre-deployment testing; (2) complete an AI impact assessment; (3) conduct ongoing monitoring for performance and potential adverse impacts; (4) ensure adequate human training and assessment; (5) provide additional human oversight, intervention, and accountability; (6) offer consistent remedies or appeals; and (7) consult and incorporate feedback from end users and the public. Id. at 15-17. Of note, this memo does not reference the National Institute of Standards and Technology’s AI Risk Management Framework.
This memo became effective upon its release on April 3, 2025, meaning the clock is already ticking for federal agencies to implement it in accordance with the various timelines set forth by OMB.
M-25-22 Provides Guidance for Federal Agencies to Improve Responsible AI Acquisition.
The second memo, M-25-22, provides both requirements and recommendations for federal procurement of AI, rescinding and replacing M-24-18: Advancing the Responsible Acquisition of Artificial Intelligence in Government. This memo will go into effect on October 1, 2025, and applies to any contracts awarded or renewed after that date but excludes contracts for AI systems acquired for use as a component of a national security system.
There are three core policies that drive the requirements and recommendations in this memo: (1) ensuring the government and the public benefit from a competitive American AI marketplace; (2) safeguarding taxpayer dollars by tracking AI performance and managing risk; and (3) promoting effective AI acquisition with cross-functional engagement. In furtherance of these goals, M-25-22 directs agencies, among other things, to:
- Update internal procedures on acquisition consistent with the memo within 270 days. The memo details requirements and recommendations for agencies’ AI acquisition practices on the topics of identifying procurement requirements (including the likelihood of high-impact AI use cases), market research and planning, solicitation development, selection and award, contract administration, and contract closeout. Within these categories, there are some notable recommendations.
- For example, in the area of market research, the memo “strongly encourage[s]” agencies to use performance-based techniques based upon a statement of objectives instead of a potentially “over-limiting” statement of work and to include quality assurance surveillance plans to oversee such performance-based requirements. Promoting greater use of performance-based contracting is a trend we are seeing across government.
- In the solicitation development area, the memo recommends including provisions that prevent “vendor lock-in.” Protecting against such lock-in appears in other recommendations throughout the memo, including as part of the contract terms and contract closeout.
- In the area of contract terms, the memo emphasizes the ability to test systems prior to award and then ensure that the agency has ongoing testing and monitoring rights. Thus, the memo anticipates that agencies will actively and continuously monitor AI systems for performance and cost-effectiveness. M-25-22 at 4, 6-12.
- Maximize use of U.S.-produced AI products and services.
- Comply with the privacy policies and processes in OMB Circular No. A-130.
- Ensure proper use of government data within AI systems while protecting contractors’ intellectual property rights. Of note, the memo directs agencies to revisit and, as necessary, update their processes for ownership of government data, particularly where government data is used to “train, fine-tune, and develop the AI system.” It also indicates that guidelines should ensure that government information “must only be collected and retained by a vendor when reasonably necessary to serve the intended purposes of the contract” and restricts use of “non-public inputted agency data and outputted results to further train publicly or commercially available AI algorithms,” without “explicit agency consent.” This will require AI contractors, in turn, to have mechanisms to isolate government data from other training data and likely to demonstrate that such data is not being used to train other models.
The memo also tasks the General Services Administration (GSA) with creating publicly available guide(s) to assist with the procurement of AI systems within 100 days and to develop a web-based repository to facilitate the sharing of information, knowledge, and resources about AI acquisition within 200 days.
Finally, while the memo does not provide extensive details on this point, it contemplates potential disclosure requirements for contractors regarding the use of AI, noting that “[a]gencies must be cognizant of the risks posed by the unsolicited use of AI systems by vendors and determine whether there are circumstances that merit including a provision in a solicitation requiring disclosure of AI use as part of any given contract's performance.”
***
Wiley’s Artificial Intelligence Practice counsels clients on AI compliance, risk management, and regulatory and policy approaches, and we engage with key government stakeholders in this quickly moving area. Wiley’s Government Contracts Practice advises contractors of all sizes on government acquisitions and compliance with government policies, including those related to AI. Please reach out to the authors with any questions.
To stay informed on all the directives and announcements from the Trump Administration, please visit our dedicated resource center below.