Alston & Bird Consumer Finance Blog

AI

Shareholders Sharpen Focus on AI-Related Securities Disclosures

What Happened?

As Alston & Bird’s Securities Litigation Group reported, the number of securities class actions based on AI-related allegations is rising.  With six new filings in the first half of 2024 and at least five more identified by the authors since, a new trend of AI lawsuits has emerged. This trajectory is likely to continue alongside increased AI-related research and development spending in the coming years.

Why Is It Important?

A recent proposed rule and several enforcement actions indicate that the Securities and Exchange Commission (“SEC”) has a growing appetite for regulating AI-specific disclosures, and shareholders are showing a corresponding interest in AI-related claims. In this environment, it is imperative that companies remain cognizant of their public statements on AI.

Last year, the SEC proposed a rule that would govern the use of AI by broker-dealers and investment advisers. Although the rule is not yet final, the agency has pursued several AI-related enforcement actions under its existing authority over false or misleading public statements.

Thus far, the SEC’s enforcement actions have been limited to companies whose public statements on AI usage were at issue.  These companies allegedly claimed to use a specific AI model to elevate their customer offerings but could not provide any evidence of their AI implementation when questioned by the SEC.

Those previous actions do not necessarily mean that a company’s ability to prove it implemented AI technology in some form will be enough to avoid scrutiny or liability. Investor plaintiffs targeting companies’ AI disclosures represent a new frontier of potential risk for companies and their directors and officers.

What To Do Now?

Companies should consider whether the board’s audit or risk committee should be tasked with understanding the company’s AI use and evaluating associated disclosures, in addition to any privacy and confidentiality concerns that arise. Companies can also identify internal AI experts to vet proposed technical disclosures on AI and confirm that they are accurate. The key is to ensure that AI disclosures, and claims about the company’s AI prospects, have a reasonable basis that is adequately disclosed.

Companies should also create and maintain appropriate risk disclosures. Risk factors addressing material AI-related risks are more meaningful when they are tailored to the company and its industry rather than recited as boilerplate.

CFPB Submits Comment Letter on Use of AI in Financial Services

What Happened?

On August 12, the Consumer Financial Protection Bureau (CFPB) submitted a comment letter in response to a Treasury Department Request for Information on the use of AI in financial services.

Why Is It Important?

Reiterating that “there is no ‘fancy new technology’ carveout to existing consumer financial laws,” the CFPB emphasized that products and services built with innovative technologies must comply with consumer protection laws and regulations, including the Equal Credit Opportunity Act (ECOA) and the prohibition on unfair, deceptive, or abusive acts or practices (UDAAP), in both origination and servicing.

The CFPB’s comments underscore its sustained regulatory focus on emerging technologies and its goal of balancing responsible innovation with consumer protection. The Bureau has made clear that companies must comply with consumer financial protection laws when adopting emerging technology, stating, “[i]f firms cannot manage using a new technology in a lawful way, then they should not use the technology.”

The comment letter emphasizes the CFPB’s focus on the growing use of emerging and innovative technologies in consumer financial services, including machine learning, “traditional” forms of artificial intelligence, and generative artificial intelligence. Even as the CFPB balances support for innovation in the consumer space, it has set its sights squarely on how those technologies are used and what their impact on consumers may be.

What To Do Now?

Companies using (or considering using) emerging technologies should have clear governance mechanisms to ensure alignment between business priorities and appropriate risk management practices, including where vendors are engaged to provide innovative technology solutions. There is no one-size-fits-all model, however, and the use case for the technology will drive the primary risk analysis. As the use of emerging technologies continues to expand, ensuring stakeholder involvement and alignment should be a top priority.

CFPB Continues Scrutiny of Algorithmic Technology

On May 26, 2022, the Consumer Financial Protection Bureau released a Consumer Financial Protection Circular stating that creditors using algorithmic tools to make credit decisions must provide “statements of specific reasons to applicants against whom adverse action is taken” pursuant to ECOA and Regulation B. The CFPB has previously stated that circulars are policy statements meant to “provide guidance to other agencies with consumer financial protection responsibilities on how the CFPB intends to enforce federal consumer financial law.” The circular posits that some complex algorithms amount to an uninterpretable “black box” that makes it difficult, if not impossible, to accurately identify the specific reasons for denying credit or taking other adverse actions. The CFPB concluded that “[a] creditor cannot justify noncompliance with ECOA and Regulation B’s requirements based on the mere fact that the technology it employs to evaluate applications is too complicated or opaque to understand.”

This most recent circular follows a proposal from the CFPB related to the review of AI used in automated valuation models (“AVMs”). As we noted in our previous post on that topic, the CFPB stated that certain algorithmic systems could run afoul of ECOA and its implementing regulation, Regulation B. In that earlier outline of proposals concerning data inputs, the CFPB acknowledged that certain machine learning algorithms may be too “opaque” for auditing, and it further theorized that algorithmic models “can replicate historical patterns of discrimination or introduce new forms of discrimination because of the way a model is designed, implemented, and used.”

Pursuant to Regulation B, a statement of reasons for adverse action taken “must be specific and indicate the principal reason(s) for the adverse action. Statements that the adverse action was based on the creditor’s internal standards or policies or that the applicant, joint applicant, or similar party failed to achieve a qualifying score on the creditor’s credit scoring system are insufficient.” In the circular, the CFPB reiterated that, when using model disclosure forms, “if the reasons listed on the forms are not the factors actually used, a creditor will not satisfy the notice requirement by simply checking the closest identifiable factor listed.” In a related advisory opinion issued earlier this month, the CFPB also asserted that the provisions of ECOA and Regulation B apply not only to applicants for credit but also to those who have already received credit. This position echoes the Bureau’s previous amicus brief on the same topic filed in John Fralish v. Bank of Am., N.A., nos. 21-2846(L), 21-2999 (7th Cir.). As a result, the CFPB takes the position that ECOA requires lenders to provide “adverse action notices” to borrowers with existing credit and, for example, prohibits lenders from lowering the credit limit of certain borrowers’ accounts or subjecting certain borrowers to more aggressive collections practices on a prohibited basis, such as race.

The CFPB’s most recent circular signals a less favorable view of AI technology than the Bureau’s previous statements. In a blog post from July 2020, the CFPB highlighted the benefits to consumers of using AI or machine learning in credit underwriting, noting that the technology “has the potential to expand credit access by enabling lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting techniques.” The CFPB also acknowledged that uncertainty concerning the existing regulatory framework may slow the adoption of such technology. At the time, the CFPB indicated that ECOA maintained a level of “flexibility” and opined that “a creditor need not describe how or why a disclosed factor adversely affected an application … or, for credit scoring systems, how the factor relates to creditworthiness.” The CFPB concluded that “a creditor may disclose a reason for a denial even if the relationship of that disclosed factor to predicting creditworthiness may be unclear to the applicant. This flexibility may be useful to creditors when issuing adverse action notices based on AI models where the variables and key reasons are known, but which may rely upon non-intuitive relationships.” That post also highlighted the Bureau’s No-Action Letter Policy and Compliance Assistance Sandbox Policy as tools to help provide a safe harbor for AI development.

In a recent statement, however, the CFPB criticized those programs as ineffective, and they no longer appear to be a priority for the Bureau. The 2020 blog post now also carries a disclaimer that it “conveys an incomplete description of the adverse action notice requirements of ECOA and Regulation B, which apply equally to all credit decisions, regardless of the technology used to make them. ECOA and Regulation B do not permit creditors to use technology for which they cannot provide accurate reasons for adverse actions.” The disclaimer directs readers to the CFPB’s recent circular for more information. This latest update makes clear that the CFPB will closely scrutinize the underpinnings of systems using such technology and will require detailed explanations for their conclusions.