Into the AI Future: What a Relator’s Use of AI in a Recent FCA Case Portends About the Future of Whistleblowing
A recent California False Claims Act (CFCA) settlement highlights how relators are turning to artificial intelligence (AI) tools to develop allegations of fraud. By understanding how relators and other whistleblowers may use AI, companies can better prepare to address these issues in the future.
City of San Diego ex rel. Blackbird Special Project v. Invitation Homes
In Invitation Homes, the relator alleged that the defendant, a rental company, renovated thousands of single-family homes in California but intentionally and systematically failed to obtain the necessary building permits. The defendant allegedly took such actions to get renovated houses on the market quickly and to avoid permit fees and increased property taxes.
The relator relied heavily on proprietary software that implemented AI and machine learning to identify the alleged fraud. The relator claimed that the software scoured rental listing websites for homes owned by the defendant and accessed pre-renovation images of those homes from real estate websites. The relator also alleged that it used the software to compare the pre-renovation images with post-renovation images.
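The complaint describes what the software purportedly did, not how it did it, so any technical reconstruction is necessarily speculative. Purely as an illustration of the kind of automated image comparison alleged, the minimal Python sketch below scores how much a pre-renovation photo differs from a post-renovation photo using a simple "average hash"; the file names, threshold, and Pillow-based approach are assumptions for illustration and do not reflect the relator's actual tool.

```python
# Illustrative sketch only: the relator's proprietary software is not public,
# and nothing here reflects its actual design. This shows one generic way an
# image-comparison step could be automated.
from PIL import Image  # third-party: pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a grayscale thumbnail; encode pixels brighter than the mean as 1 bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two hashes; a larger count suggests more visual change."""
    return bin(a ^ b).count("1")


# Hypothetical file names, purely for illustration.
pre = average_hash("pre_renovation.jpg")
post = average_hash("post_renovation.jpg")
if hamming_distance(pre, post) > 10:  # arbitrary threshold for this sketch
    print("Substantial visual change detected; flag property for permit-record review.")
```

In practice, a tool like the one alleged would also need to gather listings at scale and match properties to permit records, steps omitted from this sketch.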
The defendant filed a motion to dismiss, arguing that the complaint failed to state a claim and otherwise lacked the requisite particularity under Federal Rules of Civil Procedure 9(b) and 12(b)(6). Among other things, the defendant argued that the relator had failed to adequately explain its data analysis and use of its proprietary software. The defendant also argued that the CFCA’s public disclosure provisions, which are similar to those of the federal False Claims Act, barred the relator’s claims because the allegations of fraud were publicly disclosed on the various rental listing and real estate websites that the relator accessed with its proprietary software.
These arguments did not resonate with the district court. The court found that the relator sufficiently described its process for identifying the alleged fraudulent scheme. The court also found that the websites accessed by the relator’s proprietary software did not constitute “news media” and therefore did not trigger the public disclosure bar. After the court denied the motion to dismiss, the defendant eventually settled for $20 million this summer.
Takeaways from Invitation Homes
Invitation Homes is instructive as an early example of a relator relying on AI technology to develop allegations of fraud. Several takeaways can be gleaned from the case.
Whistleblower Use of AI Technology
In Invitation Homes, the relator’s use of AI technology buttressed its allegations that the defendant’s fraud was widespread. Rather than relying solely on a handful of representative examples, the relator pointed to findings from its proprietary software to allege a scheme spanning thousands of properties. Other relators and whistleblowers may similarly start relying on AI analyses to demonstrate the breadth of any alleged fraud. By referencing AI analyses in their allegations, whistleblowers may hope to capture the attention of the court (and survive a motion to dismiss) as well as government investigators.
AI tools may also be a boon to the professional whistleblower (as was the case in Invitation Homes). Professional whistleblowers typically resort to reviewing websites, public databases, and company filings for hints of fraud. They may now start investing resources to develop more sophisticated AI technology focused on identifying potential fraud or otherwise supporting whistleblower complaints. And while the federal False Claims Act tends to favor insiders and other “original sources,” which typically cuts against the professional whistleblower, other whistleblower programs are open to all.
District Court Scrutiny of AI Technology
The court in Invitation Homes did not scrutinize the relator’s “proprietary software” that purportedly used “artificial intelligence and machine learning.” Although the defendant argued that the relator’s explanation of its data analysis and use of artificial intelligence was lacking, the court appeared satisfied with the relator’s description of what its software purportedly did. The court did not require the relator to explain how the software worked. Presumably, this would have been a topic in discovery, but the court did not delve into this or similar issues at the motion-to-dismiss stage. Nor did it require the relator to amend its complaint to address these issues in more detail.
This approach may reflect how other courts will handle allegations based on purported findings from AI tools. Defendants should consider seeking early discovery of any AI tools or otherwise pushing a court or investigator to closely scrutinize allegations based on AI. With well-reported instances of AI hallucinations and other errors, AI tools may lead to flawed or biased analyses.
Public Disclosure Bar and AI
The relator in Invitation Homes used AI technology to scour specific websites and publicly available databases. The district court concluded that these websites and databases were not among the types of sources enumerated in the public disclosure bar. Accordingly, the court held that the CFCA public disclosure bar did not apply. Because the court did not closely review the relator’s proprietary software and AI technology, it was unclear whether the software implicated other enumerated sources. It was also unclear whether, and how, the software and AI tools were trained on any publicly available information.
Defendants should consider adopting a broad view of the public disclosure bar when it comes to AI technology. Even if a relator tries to cabin use of AI technology to non-public sources, the AI technology may have been trained on, or otherwise engaged with, the types of public sources enumerated in the public disclosure bar.
Compliance Efforts and AI Technology
In Invitation Homes, the relator purportedly used AI technology to help identify fraud. But the use of AI to identify potential fraud is not limited to whistleblowers. Companies should consider staying ahead of whistleblowers by implementing their own AI tools, incorporating them into their compliance departments and voluntary disclosure programs. AI tools have the potential to surface red flags and other signs of fraud, making compliance departments more effective.
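To make the recommendation concrete, the sketch below shows the simplest version of such a check: cross-referencing internal renovation records against permit filings to surface properties with no permit on file, the same kind of red flag alleged in Invitation Homes. The data fields are hypothetical, and an AI-driven compliance tool would layer models and far richer data on top of this basic cross-referencing.

```python
# Hypothetical compliance check: flag renovations that lack a matching permit.
# Field names and data shapes are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Renovation:
    property_id: str
    completed: str  # ISO date of renovation completion


def flag_unpermitted(renovations: list[Renovation], permitted_ids: set[str]) -> list[Renovation]:
    """Return renovations with no corresponding permit on file."""
    return [r for r in renovations if r.property_id not in permitted_ids]


# Toy data standing in for internal records and a permit log.
renovations = [Renovation("A-102", "2023-04-01"), Renovation("B-217", "2023-05-12")]
permits = {"A-102"}
for r in flag_unpermitted(renovations, permits):
    print(f"Review {r.property_id}: renovation completed {r.completed} with no permit on file.")
```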
Looking Ahead
Invitation Homes is at the forefront of cases involving the purported use of AI by a whistleblower. As more individuals and companies adopt AI tools, whistleblowers are likely to become more sophisticated in their use and deployment of AI technology. Companies should be prepared to push back strongly if and when AI shows up in a whistleblower complaint. At the same time, companies should explore how to strategically deploy AI in their own compliance departments. In any event, the use of AI technology will likely become routine in whistleblower cases in the years ahead.