FTC Flags Key Issues Agency Is Watching in Artificial Intelligence
Last week, the Director of the FTC’s Bureau of Consumer Protection, Andrew Smith, outlined in a detailed blog post how the agency is approaching important issues in artificial intelligence (AI). The post follows an FTC hearing on AI and algorithms in late 2018. While it is not an official Commission report, the Director’s thorough comments provide a roadmap for how FTC staff are likely to approach a number of critical issues involving AI and consumer protection. Here are some highlights and what they mean for businesses using AI.
Transparency in AI Interactions. The Director indicated that the FTC may challenge companies that deploy consumer-facing services that use AI, but mislead consumers “about the nature of the interaction.” The examples in the post focus on AI applications that are basically fakes, such as fake dating profiles or fake social media followers. But the post also warns that companies using AI chatbots – which are increasingly common tools – should be careful not to mislead consumers. While this leaves some ambiguity about the rules of the road for disclosures around chatbots and similar AI tools in practice, one key question under FTC law is whether the use of AI in these circumstances is “material” to consumers – that is, likely to affect their behavior or decision to use a service.
Transparency in Data Gathering. The Director also indicated that the agency might challenge the gathering of data to power AI, if it believes there was a misrepresentation about data collection. The post cites, as a specific example, statements that might mislead consumers about whether facial recognition services are “on” or “off” by default. Companies that use automated tools to gather data used for AI algorithms will need to tread carefully – even if there is no affirmative obligation (such as under state law) to explain all the ways that data is collected and used.
Explainability. The Director emphasizes that companies should explain AI decisionmaking in certain circumstances: if a company “den[ies] consumers something of value based on algorithmic decision-making,” it should “explain why,” and if a company “use[s] algorithms to assign risk scores to consumers,” it should “also disclose the key factors that affected the score, rank ordered for importance.” Each of these mirrors requirements placed on companies in the credit area, but the basis in FTC law more generally is not clear from the post. The Director also advises that if automated tools are used to “change the terms of a deal,” that should be disclosed – citing a case that involved an alleged failure to disclose use of a behavioral scoring model in setting credit limits.
Fairness and bias. The Director cites existing antidiscrimination laws such as the Equal Credit Opportunity Act (ECOA), and advocates “rigorously testing” algorithms. The post also advocates for consumers’ rights to dispute the accuracy of information used in AI decisions, though as a legal matter, these rights are limited to certain contexts, such as those governed by the Fair Credit Reporting Act (FCRA). It is noteworthy that, in this area, Commissioner Rebecca Slaughter has been pushing for the FTC to take action, even outside of those statutes, against “data abuses,” which she has suggested could include biases resulting from algorithmic decisionmaking. The Director’s post does not directly address this.
Accountability. The post makes at least two points on accountability that are worth watching: First, the Director encourages companies to protect their algorithms from “unauthorized use” (such as in the case of voice cloning technologies). This has implications for companies that develop AI algorithms and license them to third parties, or otherwise exercise some downstream control over their use. The extent of potential legal liability will vary with the circumstances, but AI developers should keep in mind how their products can be used by their potential customers and business partners. Second, the Director encourages the use of independent third-party standards as accountability mechanisms, noting that “outside tools and services are increasingly available as AI is used more frequently, and companies may want to consider using them.” Companies using AI should therefore consider whether third-party testing and validation are appropriate tools for mitigating risks such as bias. (In certain limited circumstances, such as government use of facial recognition in Washington State, third-party testing will soon be required.)
Rules specific to credit and consumer reporting. Financial services providers have often been ahead of the curve in dealing with regulatory obligations involving AI and machine learning, particularly in determining how such technologies are regulated by statutes such as ECOA and the FCRA. The Director’s post reiterates a number of points about using big data analytics from the FTC’s 2016 Big Data Report, including the potential need to send adverse action notices in certain circumstances that fall under the FCRA. As the post notes, the FCRA applies to certain decisionmaking not just about credit, but also about employment, insurance, housing, and other areas.
On a big picture level, the Director’s post outlines the areas the FTC is likely to scrutinize around AI in the near term. Indeed, because almost all of the agency’s investigations are non-public, the Director’s comments may signal issues the agency is already evaluating. Businesses harnessing rapid advancements in AI will want to keep these comments in mind and continue to watch closely.