New Report from NTIA Calls for More AI Regulation
At the end of March, the National Telecommunications and Information Administration (NTIA) issued its long-awaited AI Accountability Policy Report (Report), which provides federal policy recommendations regarding accountability for artificial intelligence (AI) systems. The Report calls on federal policymakers to look to regulatory approaches, not just voluntary approaches, to improve AI accountability. The Report also provides important signals about the types of requirements and expectations, including disclosures and audits, that companies developing and deploying AI systems may face in the near future. Indeed, we have already seen efforts at the federal and state level to put these kinds of requirements into law.
Specifically, the Report explains that “[t]o justify public trust in, and reduce potential harms from, AI systems, it will be important to develop ‘accountability inputs’ including better information about AI systems as well as independent evaluations of their performance, limitations, and governance.” The Report envisions the government’s role in this accountability chain as “encouraging, supporting, and/or compelling these inputs,” and along those lines, it divides its policy recommendations into three categories: Guidance, Support, and Regulations. Below, we provide a high-level summary of the recommendations in each category.
Policy Recommendations Regarding Guidance
The Report provides three categories of recommendations under its Guidance category: (1) audits and auditors; (2) disclosure and access; and (3) liability rules and standards.
Audits and auditors. The Report recommends that federal agencies create guidelines for AI audits and auditors using existing or new authority. Such guidelines should cover: risk mitigation and management, including harm prevention; data quality and governance; communications; and governance or process controls.
Disclosure and access. The Report recommends that federal agencies use existing or new authority to standardize information disclosures. Among other suggestions, the Report includes an AI “nutrition label” as a potential method for relaying the disclosed information.
Liability rules and standards. The Report encourages federal agencies to work with stakeholders to make recommendations about applying existing liability rules and standards to AI systems and, when appropriate, developing new liability rules.
Policy Recommendations Regarding Support
The Report provides two categories of recommendations under its Support category: (1) people and tools; and (2) research.
People and tools. The Report recommends that federal agencies support and invest in technical infrastructure, AI system access tools, personnel, and international standards to promote accountability. It also recommends that Congress allocate National AI Research Resource (NAIRR) funds to contribute to resources including: testing for equity, efficiency, and other attributes and objectives; compute and cloud infrastructure required to conduct rigorous evaluations; access to AI system components and processes for researchers, regulators, and evaluators; independent red-teaming support; and international standards development.
Research. The Report recommends that federal agencies conduct and support more research and development related to AI testing and evaluation. Proposed research would include exploration of (1) the creation of reliable, widely applicable evaluation methodologies for model capabilities and limitations, safety, and trustworthy AI attributes; (2) durable watermarking and other provenance methods; and (3) technical tools that facilitate researcher and evaluator access to AI system components in ways that preserve data privacy and the security of sensitive model elements, while retaining openness.
Policy Recommendations Regarding Regulations
The Report provides three categories of recommendations under its Regulations category: (1) audits and other independent evaluations; (2) cross-sectoral government capacity; and (3) contracting.
Audits and other independent evaluations. The Report recommends that federal agencies use existing or new authority to require independent evaluations and regulatory inspection of high-risk AI model classes and systems. Federal agencies are also encouraged to coordinate with regulators in non-adversarial countries to align inspection regimes. The Report acknowledges that “[i]t may not currently be feasible to require audits for all high-risk AI systems because the ecosystem for AI audits is still immature.”
Cross-sectoral government capacity. The Report recommends that the federal government strengthen its capacity to address cross-sectoral risks and practices related to AI. Recommended common baseline requirements that could apply across sectors include: (1) a national registry of high-risk AI deployments; (2) a national AI adverse incident reporting database, a platform for receiving reports, and a national registry of disclosable AI system audits; (3) coordination of, and participation in, audit standards and auditor certifications; (4) pre-release review and certification for high-risk deployments and/or systems or models; (5) collection of periodic claim substantiation for deployed systems; and (6) coordination of AI accountability inputs with agency counterparts in non-adversarial states.
Contracting. The Report recommends that the federal government leverage its purchasing power to shape marketplace standards. In this respect, the Report suggests that the federal government require government suppliers, contractors, and grantees to adopt sound AI governance and assurance practices for AI used in connection with a contract or grant, including using AI standards and risk management practices recognized by federal agencies, as applicable.
***
While the Report itself does not have immediate or direct impacts on the private sector, it provides a roadmap for federal agencies to use in establishing guidance and rules around the development and deployment of AI systems. In particular, companies developing and using AI should pay close attention to the recommendations around potential auditing and assessment obligations, which could lead to new requirements for certain AI deployments.
***
Wiley’s Artificial Intelligence practice counsels clients on AI compliance, risk management, and regulatory and policy approaches, and we engage with key government stakeholders in this fast-moving area. Please reach out to a member of our team with any questions.