Federal Efforts on AI Are Picking Up
Under the President’s Executive Order on AI, the National Institute of Standards and Technology (NIST) is tasked with putting together a plan for federal engagement in developing standards for deploying AI technologies, and the agency confirmed Thursday that it is moving quickly to do so. In a presentation to the Information Security and Privacy Advisory Board, the NIST representative leading the project outlined the next steps in the process, including an RFI to be released shortly and a workshop at the end of May. These will be the first steps in a multi-stage process to establish a framework for the federal approach to AI standards, and a key opportunity for stakeholders to engage.
What the Executive Order requires NIST to do. Under Section 6(d) of the Executive Order, NIST is responsible for producing “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” This plan must identify federal priority needs for standardization of AI systems development and deployment, as well as opportunities for U.S. leadership in standardization for AI technologies. The timeframe is short: the plan must be produced by August 10 of this year. And NIST must engage (as it is accustomed to doing) with the private sector, non-governmental entities, and other stakeholders throughout the process.
Given the short timeline, NIST is moving quickly to solicit public comment. NIST has indicated that an RFI likely will be released in the coming weeks and that a public workshop is being planned for late May. There will also be additional opportunities for stakeholders to engage, through meetings or possibly other events with NIST.
What is NIST’s role? The Executive Order puts NIST at the forefront of developing standards for AI implementation at the federal level. NIST has considerable experience developing standards in areas related to AI – for example, its work on cybersecurity and privacy frameworks. AI raises those issues as well as others, like potential bias and explainability, that will be front and center as NIST develops its approach.
NIST’s standards on AI have the potential to be enormously influential. They can be applied to companies doing business with the federal government, and they will form the basis for U.S. engagement on similar projects worldwide. Additionally, the accelerated timeline likely will put NIST ahead of other agencies working in this area. The Federal Trade Commission, for example, is nearing the end of a series of hearings on issues ranging from privacy to data security to AI, but it remains to be seen whether the agency will put out guidance specific to AI.
What will be NIST’s framework? NIST has noted that it is focusing on “Trustworthy AI,” with the idea that standards can help cultivate trust in AI, which is a key element in wider adoption of the technology. Rather than try to settle on a single definition of “trustworthy” AI, NIST indicated that it will look at factors like accuracy, reliability, privacy, robustness, and explainability. In all of these areas, the standards adopted – whether risk-based and light-touch or more prescriptive and one-size-fits-all – will bear on the way AI technology is adopted. Moreover, NIST clearly wants stakeholder input before nailing down the framework and the key issues on which it will focus.
As the process unfolds over the next few months, we will have more on NIST’s approach and opportunities for stakeholders to engage on federal AI policy.