Heading into 2024, Federal AI Activity Ramps Up After AI Executive Order
2023 has been a big year for AI, with the landmark Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (EO) adding to an already busy and dynamic AI landscape. Issued less than two months ago, the EO has already spurred federal agencies into action with its dozens of directives. These wide-ranging directives include developing strategies for AI safety, refining standards for AI management, and focusing agencies on managing the risks of the AI they procure from the private sector. Whether or not it is directly regulatory, the federal agency work already underway will form the nuts and bolts of AI governance for the public and private sectors in the coming year. Here is a snapshot of some of the latest developments so far – with more to come in the new year.
Some Agencies Have Already Acted Quickly in Response to the EO
OMB’s Broad Agency Guidance. On November 1, just two days after the EO was issued, the Office of Management and Budget (OMB) released a Proposed Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The proposed memorandum responds to the EO’s Section 10.1(b) directive that OMB “issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.” The guidance – still in draft form pending OMB’s response to public comments – would direct agencies to develop and publish individual strategies for advancing AI use within the agency, and would require agencies to implement minimum risk management practices whenever the AI they use falls within the broadly defined categories of “rights-impacting AI” or “safety-impacting AI.”
At least in draft form, the OMB guidance proposes fairly rigid risk management approaches for the many contexts in which agencies use AI – requirements that could be passed on to private sector suppliers of AI tools and technology to the government. The draft guidance also sets out a series of recommendations for managing AI risks in federal procurement and states that OMB intends to develop a system for ensuring that federal contracts align with the EO. Comments on the draft guidance were due in early December, and OMB received close to 200 public comments. OMB’s final guidance is due within 150 days of the EO – by March 28, 2024.
CISA Roadmap for AI and Guidelines for Secure AI System Development. On November 14, the Cybersecurity and Infrastructure Security Agency (CISA) released its agency-wide strategy to promote beneficial uses of AI for cybersecurity while also protecting against and deterring AI-related cybersecurity risks, especially those related to critical infrastructure. CISA’s Roadmap for AI lays out five lines of effort to be implemented across the agency, including responsibly integrating AI into CISA’s work to advance cybersecurity measures for critical infrastructure; assessing and making recommendations to protect critical infrastructure from malicious AI threats; and working with external partners, both domestic and international, on key AI efforts.
Indeed, shortly after the release of its Roadmap, on November 26, CISA – together with the United Kingdom’s National Cyber Security Centre – announced the release of its Guidelines for Secure AI System Development, which is meant to promote secure-by-design principles for AI and help developers of AI systems make informed cybersecurity decisions throughout the development process. Overall, CISA’s work on AI security and safety promises to have broad impacts throughout the private sector, including but not limited to critical infrastructure. For example, when CISA released the Guidelines, it noted that it “urge[s] all stakeholders – including data scientists, developers, managers, decision-makers, and risk owners – to read this guidance to help them make informed decisions about the design, deployment, and operation of their machine learning AI systems.”
DoD Responsible AI Toolkit. On November 14, the Department of Defense (DoD) released its Responsible Artificial Intelligence (RAI) Toolkit, a user guide that incorporates DoD’s own AI Ethical Principles and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF) into the development of AI products. The RAI Toolkit provides a step-by-step process for AI developers to follow when designing new AI systems for use by DoD, beginning with general information-gathering steps and ending with recommended evaluation processes to test an AI system’s robustness, resilience, and reliability. The Toolkit will have a direct impact on companies providing AI-enabled technologies to DoD and may influence other agencies’ approaches as well.
NIST AI RFI. Most recently, on December 19, NIST announced the release of a Request for Information (RFI) seeking input to assist it in carrying out several of its responsibilities under the AI EO. For example, NIST asks for comment on approaches to generative AI risk management, to inform its development of a generative AI companion to the AI RMF; on use cases and best practices for red-teaming, to inform guidelines that will enable AI developers to conduct red-teaming tests; and on tools, standards, and challenges for synthetic content creation, detection, labeling, and auditing, to aid the guidance NIST will create for agencies on authenticating content. Comments are due February 2, 2024.
Independent Agencies Have Acted in Parallel with the EO to Address AI
Independent agencies – which are not directly governed by the EO – have also been actively announcing new AI initiatives that complement the EO’s recommendations and its guiding principles of ensuring AI safety and security while promoting innovation.
For example, on November 16, 2023, the Federal Communications Commission (FCC) released a Notice of Inquiry (NOI) seeking information on the implications of emerging AI technologies for robocalling. This directly followed the EO’s recommendation for the FCC “to consider actions related to how AI will affect communications networks and consumers, including by ... encouraging, including through rulemaking, efforts to combat unwanted robocalls and robotexts that are facilitated or exacerbated by AI and to deploy AI technologies that better serve consumers by blocking unwanted robocalls and robotexts.” The NOI seeks comment on how to define AI as applied to the Telephone Consumer Protection Act (TCPA). Specifically, it seeks information on how AI can be used to: (1) protect consumers from unwanted robocalls and robotexts; (2) improve the FCC’s ability to enforce the TCPA; and (3) improve accessibility. The FCC received an initial round of comments on its AI/TCPA NOI in December and will accept reply comments until January 16, 2024.
The Federal Trade Commission (FTC) has also made AI-related announcements since the EO was issued, building on its announcements over the past year. On November 16, the FTC announced a voice cloning challenge aimed at encouraging the private sector to develop solutions that protect consumers from AI-enabled voice cloning harms, and issued detailed rules permitting individuals, for-profit companies, nonprofit organizations, and others to submit ideas for protecting consumers from harms such as fraud. Additionally, on November 21, the FTC approved a resolution streamlining its nonpublic investigations into products and services that use AI – signaling that the FTC will continue to scrutinize the development and use of AI as part of its investigation and enforcement strategy under existing laws. And indeed, just this week, the FTC announced a landmark settlement agreement with a company over its use of AI and facial recognition technology; under that agreement, the company is banned from using facial recognition technology for surveillance purposes for five years.
What’s Next
The highlighted activity is just a small sample of the types of actions agencies will take in the new year. Most immediately, the EO includes several 90-day deadlines, which land on January 28, 2024. Among these January directives, businesses can expect movement at the U.S. Department of Commerce regarding reporting requirements for companies developing, or intending to develop, dual-use foundation models.
More broadly, businesses should track a number of directives involving AI governance standards and best practices that can apply beyond individual agencies. For example, as highlighted by its AI RFI, NIST is directed to issue a range of standards and frameworks that could be implemented by both the public and private sectors in the development and use of AI, including companion guides to the AI RMF and processes and procedures for testing dual-use foundation models. Already, in the wake of the EO, NIST has established an AI Safety Institute and a related Consortium to help develop tools to measure and improve AI safety and trustworthiness, and much more will come from NIST on this topic – as required by the EO and building on NIST’s long-standing work on AI risk management.
As 2023 wraps up, it is clear that the AI EO will carry the robust discussion of AI governance approaches into the new year. Companies that operate in this space – from those that develop AI models to those that use them – should monitor the AI EO workstreams closely to assess their impact on the private sector. There are already multiple opportunities for stakeholder engagement – including filing comments with NIST in response to its AI RFI (due February 2, 2024) and with the FCC in response to its AI/Robocalling NOI (reply comments due January 16, 2024) – with many more on the horizon.