PrivacyCon Illustrates the FTC’s Focus on AI and Automated Decision Making Systems
Last week we wrote about the Federal Trade Commission’s (FTC) seventh annual PrivacyCon—discussing overall takeaways from the conference and summarizing key takeaways from the Children’s Privacy panel. This post—the last in our series on PrivacyCon 2022—looks at another notable panel: automated decision making systems (ADMS). Below, we recap the key takeaways from the panel and discuss how the issues raised at PrivacyCon relate to ongoing federal activity on ADMS and Artificial Intelligence (AI) at the FTC and beyond.
Key Takeaways from PrivacyCon Panel on ADMS
The panelists generally emphasized the potential harms of AI and ADMS and the challenges of developing and deploying effective accountability mechanisms. The panel moderator, Dr. Sarah Myers West from the FTC's Office of Policy Planning, opened the discussion by highlighting Rashida Richardson's research on algorithmic bias, which discusses the need for greater scrutiny of ADMS. Princeton computer science professor Arvind Narayanan argued that, while audits can help identify bias in ADMS, they do not reveal the root cause of that bias, and that outdated research models and data sets can make audits ineffective. Professor Michael Veale of University College London added that auditors and operators of ADMS need to fully understand a system's original design, because applying an ADMS in a way that differs from its original purpose can produce misaligned results. He suggested that attempts to scale ADMS quickly and beyond their original design framework may contribute to algorithmic harms.
The panel also discussed potential regulatory approaches. Researcher Inioluwa Deborah Raji suggested that AI accountability could be addressed through a regulatory system similar to the one used to approve medical devices, which requires that devices be tested and approved by regulators before they reach consumers. Other panelists likewise discussed the need for greater regulatory supervision and independent audits.
Broader Federal AI Efforts
This PrivacyCon panel highlights the FTC’s ongoing interest in AI and overlaps with a number of issues the FTC is considering in the privacy rulemaking it launched with its Advance Notice of Proposed Rulemaking (ANPR) on Commercial Surveillance. The ANPR includes questions about potential harms from ADMS errors or bias, seeking comment on how much algorithmic error and discrimination is inevitable in ADMS and how those harms compare to the benefits of using algorithms. The questions in the ANPR also suggest that the FTC is considering greater regulation.
In addition to the FTC’s work, the National Institute of Standards and Technology (NIST) is also considering issues related to trustworthy AI. It has issued two drafts of its AI Risk Management Framework (RMF), which outlines voluntary approaches that AI developers and operators can implement to promote reliable, fair, and safe AI systems and manage potential risks. The White House also recently released a voluntary AI framework, the Blueprint for an AI Bill of Rights, which sets out five fundamental principles for developers and users of AI systems.
The research highlighted at PrivacyCon is likely to continue to influence proposed approaches as the federal government looks more closely at accountability issues for AI and ADMS.
***
Wiley’s AI team assists clients in advocacy, compliance, and risk management approaches to AI technology and algorithmic decision-making. Please reach out to any of the authors with questions.