Alston & Bird Consumer Finance Blog


Shareholders Sharpen Focus on AI-Related Securities Disclosures

What Happened?

As Alston & Bird’s Securities Litigation Group reported, the number of securities class actions based on AI-related allegations is rising.  With six new filings in the first half of 2024 and at least five more identified by the authors since, a new trend of AI lawsuits has emerged. This trajectory is likely to continue alongside increased AI-related research and development spending in the coming years.

Why Is It Important?

A recent proposed rule and several enforcement actions indicate that the Securities and Exchange Commission (“SEC”) has a growing appetite for regulating AI-specific disclosures, and shareholders are showing an equally growing interest in bringing AI-related claims. In this environment, it is imperative that companies remain cognizant of their public statements on AI.

Last year, the SEC proposed a rule that would govern AI use by broker-dealers and investment advisers. Although the rule is not yet final, the agency has pursued several AI-related enforcement actions under its existing authority to regulate false or misleading public statements.

Thus far, the SEC’s enforcement actions have targeted companies whose public statements about their AI usage were at issue. These companies allegedly claimed to use specific AI models to elevate their customer offerings but could not provide any evidence of their AI implementation when questioned by the SEC.

Those previous actions do not necessarily mean that a company’s ability to prove it implemented AI technology in some form will be enough to avoid scrutiny or liability. Investor plaintiffs targeting companies’ AI disclosures represent a new frontier of potential risk for companies and their directors and officers.

What To Do Now?

Companies should consider whether the board’s audit or risk committees should be tasked with understanding the company’s AI use and considering associated disclosures, in addition to any privacy and confidentiality concerns that arise. Companies should also identify internal AI experts who can vet any proposed technical disclosures on AI and confirm their accuracy. The key is to make sure AI disclosures and company claims about AI prospects have a reasonable basis that is adequately disclosed.

Companies should also aim to create and maintain appropriate risk disclosures. Risk factors addressing material AI-related risks are more meaningful when they are tailored to the company and its industry rather than merely boilerplate.

CFPB Submits Comment Letter on Use of AI in Financial Services

What Happened?

On August 12, the Consumer Financial Protection Bureau (CFPB) submitted a comment letter in response to a Treasury Department Request for Information on the use of AI in financial services.

Why Is It Important?

Reiterating that “there is no ‘fancy new technology’ carveout to existing consumer financial laws,” the CFPB has emphasized that products and services built with innovative technologies must comply with consumer protection laws and regulations, including the Equal Credit Opportunity Act (ECOA) and the prohibition on Unfair, Deceptive, or Abusive Acts or Practices (UDAAP), in both origination and servicing practices.

The CFPB’s comments underscore the sustained regulatory focus on the use of emerging technologies, and the goal of responsible innovation balanced with consumer protection.  The CFPB has made clear that companies must comply with consumer financial protection laws when adopting emerging technology, stating, “[i]f firms cannot manage using a new technology in a lawful way, then they should not use the technology.”

The comment letter emphasizes the CFPB’s focus on the growing use of emerging and innovative technologies in consumer financial services, including machine learning, “traditional” forms of artificial intelligence, and generative artificial intelligence.  As the CFPB balances support for innovation in the consumer space, it is clear that it has set its sights squarely on how those technologies are used, and what the consumer impact may be.

What To Do Now?

Companies using (or considering using) emerging technologies should have clear governance mechanisms to ensure alignment between business priorities and appropriate risk management practices, including where vendors are engaged to provide innovative technology solutions. There is no one-size-fits-all model, however, and the use case for the technology will drive the primary risk analysis. As use of emerging technologies continues to expand, ensuring stakeholder involvement and alignment should be a top priority.