Alston & Bird Consumer Finance Blog


5 Things to Think About When Using AI

What Happened?

As the Trump Administration’s deregulatory, pro-innovation approach to emerging technology moves forward, the use of artificial intelligence has taken center stage, and it is clear that the Administration views it as a competitive economic force. The Administration’s emphasis on growth and deregulation was evident from its early days, when it revoked the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Why is it Important?

In an environment that is supportive of the use of AI and technology innovation, we expect more companies to consider the use of AI tools to realize efficiencies, create and offer new products and services, and grow market share.  As the technology itself continues to grow by leaps and bounds, the opportunities seem limitless; correspondingly, there is a growing market for these tools.

What to do Now?

As the use of AI tools expands, it is important to remember that, while the broader environment is supportive of technology and innovation, fundamental considerations remain relating to consumer protection, risk management, and governance of technology tools and of the third parties that may provide them. While each company will have its own risk management program tailored to its specific needs, we have highlighted below five important elements that should be considered as part of every program. These are enduring principles of risk management that are particularly important in the context of using AI tools.

#1: Use Case Drives Risk Profile

When using AI tools, the how, when, where, and why all matter. These factors have a direct impact on the legal and regulatory risks that companies should consider, as well as on the corresponding compliance obligations and governance mechanisms to put in place. Not all AI use cases carry the same level of risk, so it is important to begin each project with a risk assessment that examines the specific use of AI together with legal and regulatory considerations, including the potential impact on customers, the risk of discrimination and bias, and the ability to explain to a customer or a regulator how a decision was made using the AI tool.

#2: Watch Over It: Governance and Oversight

Use of AI tools should be monitored and managed to ensure accountability and oversight throughout the lifecycle of their use. Clearly defined roles and responsibilities, with ownership and accountability, should be established and maintained. It is important to bring stakeholders from across the company to the table, including functional areas such as Technology, Operations, Data Management, Privacy, Legal, Compliance, and Risk, whether they are involved in decision-making or serve in an advisory capacity.

#3: Maintaining AI Tool Integrity Through Testing

Whether using a vendor-provided AI solution or a model created by one’s own developers, ongoing monitoring and testing of the AI tool is important to ensure reliability and adherence to performance standards. Testing should occur throughout the AI tool lifecycle, from pre-deployment through ongoing use, performance, and maintenance.

#4: Data Governance:  Check the Oil that Fuels Your AI

AI runs on vast amounts of data, and how that data is sourced, collected, stored, managed, used, and ultimately disposed of has important legal and regulatory ramifications, as well as implications for the quality of any AI tool’s output. Strong data governance, therefore, is a critical part of AI governance. Inadequate data management and governance can result in ineffective and harmful AI tools that may introduce bias and potentially expose companies to legal, regulatory, and reputational risks.

#5: Third-Party Risk: Manage Vendors and Other Third Parties

Many companies use third-party providers for AI solutions that can be customized to their needs. While these arrangements offer convenience and some flexibility, they can introduce risk. From a regulatory perspective, companies that use third-party solutions are viewed as responsible for the outcomes of those tools. Effective oversight of third-party providers, therefore, is critical. It should include transparency with respect to the AI tools to be provided, including “explainability” of the tools and the models they use, as well as audit rights and other transparency and accountability measures, all of which should be addressed in the contract.

Trump Administration Rescinds Biden Executive Order on Artificial Intelligence

What Happened?

Last week, President Trump signed an Executive Order that rescinded the Biden Administration’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Titled “Removing Barriers to American Leadership in Artificial Intelligence,” the new Executive Order “revokes certain existing AI policies and directives that act as barriers to American AI innovation, [and will] clear a path for the United States to act decisively to retain global leadership in artificial intelligence.” The Trump Administration’s Executive Order directs executive departments and agencies to develop and submit to the President an action plan designed to meet that objective.

Why does it Matter?

AI is expected to be a focus for the new Administration, and policy likely will center on AI development and innovation as a matter of economic competitiveness and national security. In December, then-President-elect Trump named David Sacks, a prominent Silicon Valley venture capitalist, as the White House “AI and Crypto Czar.” When announcing this appointment, President Trump characterized AI as “critical to the future of American competitiveness…David will focus on making America the clear global leader…” We expect the Administration to focus on national security issues, including export controls where the technology could be used in military applications by non-US governments.

What’s Next?

In contrast to the deregulatory approach at the federal level, a number of states already have passed legislation relating to the use of AI, particularly in the consumer space, including laws relating to data use, consent, and disclosures. Additionally, state Attorneys General, particularly in “blue states,” have expressed concern about “high-risk” AI applications that can negatively affect consumers’ access to financial goods and services and to employment opportunities. With the growing use of AI, we expect more activity at the state level.