Alston & Bird Consumer Finance Blog

Privacy and Cybersecurity

Privacy, Cyber & Data Strategy Advisory | How AI Is Changing the Incident Response Landscape: What GCs Need to Know

Originally published January 6, 2026 on the Alston & Bird website.

Executive Summary

Our Privacy, Cyber & Data Strategy Team examines the profound implications of the evolution of AI-driven cyberattacks and offers practical steps general counsel can take to proactively defend against them.

  • Vibe hacking—AI’s ability to work autonomously—eliminates threat actors’ need for technical expertise or large teams to conduct complex attacks
  • Game-changing polymorphic malware and “just-in-time” AI-driven code regeneration can defeat traditional defenses
  • General counsel can update their companies’ own AI systems and ensure third-party AI vendors have the latest security

The cyber-threat landscape has always evolved rapidly, but the emergence and weaponization of artificial intelligence (AI)—particularly generative AI (GenAI)—by threat actors represents a seismic shift that cannot be ignored. Just one year ago, we wrote about the early stages of adversaries using AI to automate and customize cyberattacks, primarily to improve phishing campaigns, develop deepfakes, and refine their tactics, techniques, and procedures. Over the past year, threat actors have significantly escalated their use of AI, moving from “vibe hacking”—the use of agentic AI systems that can reason, plan, and act autonomously as both technical consultants and active operators of cyberattacks—to unprecedented integration and autonomy of AI throughout the attack life cycle.

If that were not enough, we are now starting to see polymorphic malware and “just-in-time” AI-driven code regeneration, which are true game changers. Polymorphic malware powered by AI can continuously rewrite its own code in real time, defeating traditional signature-based detection and even heuristic analysis. Compounding these risks are cyberattacks targeting AI systems themselves, such as prompt injection attacks that manipulate the reasoning layer of AI models and often leave no meaningful forensic trail. This creates significant limitations for conventional forensic tools and logging frameworks, which were never designed to capture the internal logic of autonomous AI agents.

The Evolution of AI in Cyberattacks: From Phishing Emails and Deepfakes to Fully Automated AI-Powered Cyberattacks

Initially, threat actors used AI to enhance phishing campaigns—improving grammar, tone, and personalization to increase success rates. Attackers soon leveraged AI to create convincing deepfakes for social engineering and fraud. By mid-2025, vibe hacking emerged as hackers began to use and code agentic AI systems not just to be assistants in an attack but as autonomous operators capable of executing complex, multistep cyberattacks. Today, organizations face fully automated AI-powered attacks with minimal human oversight.

This shift is driven by several factors:

  • An evolution of AI to be able to execute nearly every stage of an attack.
  • The maturing underground marketplace for illicit AI tools.
  • AI’s ability to complete cyberattacks faster than before.
  • Elimination of the need for hackers with deep technical expertise or large teams with specific expertise to conduct complex cyberattacks.

Vibe hacking is threat actors’ use of agentic AI systems as technical consultants and active operators of cyberattacks. In practice, AI agents like Claude Code are no longer just assisting attackers; they are executing multistep operations independently, from reconnaissance and credential harvesting to data exfiltration and ransom note generation. This evolution lowers the barrier to entry for sophisticated cybercrime and enables lone actors to conduct campaigns that previously required coordinated teams.

Cybercriminals took this to the next level in the last quarter of 2025. Highly sophisticated threat actors, including a Chinese state-sponsored group (designated by Anthropic as GTG-1002), demonstrated “unprecedented integration and autonomy of AI throughout the attack lifecycle.” GTG-1002 coordinated targeted attacks against approximately 30 entities, with several confirmed compromises.

Claude Code was manipulated to autonomously execute 80%–90% of an attack—reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration. Humans primarily assumed a strategic oversight role, initiating the campaign and intervening at critical decision points. Anthropic indicated this marked the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection.

While these attacks were highly sophisticated, limitations remain. In the Anthropic-reported attacks, AI tools occasionally hallucinated data, misidentified credentials, and flagged publicly available information as sensitive. These errors, however, can be corrected with minimal human oversight, rapidly shrinking the limitations to fully autonomous attacks.

Accelerated speed of AI-powered cyberattacks

AI has dramatically shortened the time required to execute a cyberattack from start to finish. In fact, a defining characteristic of AI-powered attacks is the ability to gather and analyze data efficiently. AI tools not only speed up the research phase of a cyberattack but also improve the accuracy and completeness of a threat actor’s analysis of data.

Anthropic’s November 2025 report illustrates how AI compresses every stage of the attack life cycle:

  • Reconnaissance. AI agents can automate reconnaissance tasks, scanning thousands of endpoints, cataloging systems, mapping infrastructure, identifying exposed services, and analyzing authentication mechanisms quickly. Tasks that once required weeks of manual effort can now be completed in hours.
  • Vulnerability Discovery. AI systems can autonomously test for misconfigurations, weak credentials, and exploitable flaws using adaptive scripts that evolve based on real-time feedback.
  • Exploitation and Lateral Movement. Agentic AI can sequentially leverage multiple vulnerabilities without human intervention, moving across networks and escalating privileges with minimal oversight.
  • Credential Harvesting and Data Analysis. AI accelerates credential cracking and can parse massive datasets for sensitive information at machine speed, enabling attackers to identify and prioritize high-value assets instantly.
  • Exfiltration and Monetization. AI orchestrates stealthy data exfiltration while simultaneously generating ransom notes or negotiating scripts tailored to victims.

This level of automation is not theoretical—it has already been observed in real-world campaigns. The ability to compress what was once a multiweek operation into a matter of hours or days is a paradigm shift in cyber-risk.

Lowering the barrier to entry

AI has erased the need for hackers to have extensive technical expertise. Threat actors who previously lacked the skills to execute complex attacks can now simulate professional competence through AI assistance. For example, North Korea’s IT remote worker scheme has been transformed by AI, enabling operatives to pass interviews, maintain engineering roles, and deliver work product without formal training. Beyond nation-state actors, AI empowers individual cybercriminals to accomplish what once required entire teams. Developing ransomware, identifying targets, and making strategic and tactical decisions around exploitation and monetization of stolen data can now be performed with the assistance of AI and a single individual overseeing the operation.

AI-Powered Polymorphic Malware and Code Regeneration

In its November 2025 AI Threat Tracker report, Google’s Threat Intelligence Group (GTIG) highlighted a shift from threat actors leveraging AI just for productivity gain to threat actors deploying novel AI-enabled malware in active operations. This can be seen in the growing use of just-in-time (JIT) code regeneration, allowing attackers to dynamically rewrite malicious code during execution, making detection and static analysis extremely difficult. GTIG emphasized that this capability enables malware to adapt to defensive measures in real time, reducing the effectiveness of traditional signature-based detection.

While GTIG focused on JIT regeneration, other sources, including Anthropic and independent threat researchers, have documented the rise of AI-powered polymorphic malware—malicious code that continuously mutates its structure to evade detection. Emerging strains such as PROMPTFLUX demonstrate how attackers use large language models (LLMs) to adjust code in malware so it can evolve dynamically and remain stealthy almost indefinitely.

The implications are profound: AI-driven polymorphism and JIT regeneration reduce the cost and complexity of maintaining stealth, making advanced malware accessible to less-skilled actors. Combined with AI’s ability to orchestrate reconnaissance, exploitation, and exfiltration, this trend alters the landscape for enterprise security.

Cyberattacks on AI Systems and Investigation Gaps

Cyberattacks against AI systems are introducing new challenges for forensic investigations and for understanding what happened and why. Even with the right security such as endpoint detection and response software and robust traditional monitoring and logging in place, there may be significant gaps when investigating AI-driven attacks.

One of the most concerning examples is a prompt injection attack, which targets AI systems by embedding malicious instructions inside what appears to be normal input. For instance, an attacker might hide a command such as “delete all logs” within a seemingly benign request. Because LLMs cannot reliably distinguish between trusted commands and untrusted data, the AI may execute these instructions without question. This creates a fundamental problem both for securing the AI system itself and for forensically investigating these attacks.

Conventional investigations rely heavily on system-level logs—detailed records that explain what happened within an operating system or application—that allow investigators to reconstruct the timeline of an attack, identify actions taken, and determine which systems were compromised. However, prompt injection attacks occur inside the AI’s reasoning layer, not at the operating system level. If the AI is instructed to erase or alter logs, investigators may only see the outcome (e.g., data exfiltration or deletion) without any record of the causal chain or how the outcome occurred.

In other words, the “why” behind the action is missing because the attack exploits the AI’s cognitive process rather than the system’s technical process. Forensic teams cannot rely on the old playbook—they likely need new methods such as monitoring AI interactions, auditing prompts, and implementing specialized AI audit trails to reconstruct the reasoning chain.

Practical Tips and Actionable Steps

As AI-driven cyber-threats accelerate, general counsel (GCs) play a critical role in shaping governance, risk management, and legal response strategies.

  1. Update Incident Response Procedures for AI-Powered Cyberattacks. AI-driven attacks occur at unprecedented speed and often outpace traditional response timelines. Companies should consider updating incident response procedures to explicitly address these threats. Organizations should also consider incorporating scenarios involving AI-powered attacks, such as polymorphic malware and prompt injection, into tabletop exercises to test readiness and identify gaps. These exercises provide valuable insights into how an AI-driven incident might unfold and help better prepare the organization to respond to the unique nature of AI-driven attacks.
  2. Investigate AI-Powered Cyberattacks—Protect Privilege and Ensure Vendor Expertise. AI-powered attacks introduce new types of evidence—such as prompt logs, model outputs, and reasoning steps—that traditional forensic processes do not capture. GCs should structure investigations to preserve attorney-client privilege for AI-related forensic evidence. Given the new types of forensic evidence, companies may also want to verify that their preferred third-party forensic firms maintain the necessary expertise.
  3. Audit AI Inputs. Where feasible and appropriate, organizations should consider regularly auditing AI inputs to fine-tune the AI system to detect and block malicious or misleading prompts before they trigger harmful actions. Auditing reduces the risk of unauthorized activity and provides transparency into how AI systems interpret and act on sensitive information. Reviewing prompts and outputs enables companies to identify misuse patterns; it also strengthens controls and ensures accountability in AI-driven decision-making.
  4. Revisit Vendor Management and Contracts. AI introduces risks that traditional vendor agreements do not fully address. Companies should consider contractual provisions that require vendors to monitor AI systems for misuse, such as prompt injection attacks, and maintain detailed audit trails of prompts and outputs for forensic investigations. Companies should consider including provisions that mandate compliance with emerging AI regulations in vendor agreements to ensure vendors meet evolving legal and security standards.
  5. Review Governance and Oversight. GCs can be an advocate for an AI risk governance framework that aligns with regulatory expectations and industry standards. Board-level reporting should include AI-specific threat trends and mitigation strategies. This helps with leadership visibility and accountability for AI-related risks.
  6. Monitor Regulatory and Liability Developments. Stay ahead of emerging AI regulations and assess potential liability exposure for AI misuse or compromised AI systems. This will allow GCs to advise leadership on compliance obligations and risk mitigation strategies.

Ransomware Fusion Center

Stay ahead of evolving ransomware threats with Alston & Bird’s Ransomware Fusion Center. Our Privacy, Cyber & Data Strategy Team offers comprehensive resources and expert guidance to help your organization prepare for and respond to ransomware incidents. Visit Alston & Bird’s Ransomware Fusion Center to learn more and access our tools.


If you have any questions, or would like additional information, please contact one of the attorneys on our Privacy, Cyber & Data Strategy team.

You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form.


NYDFS Issues Guidance on Managing Risks Related to Third-Party Service Providers

On October 21, 2025, the New York Department of Financial Services (“NYDFS”) published an Industry Letter (the “Letter”) outlining guidance on managing risks related to third-party service providers (“TPSPs”). NYDFS recognizes that as covered entities become more reliant on TPSPs, managing TPSPs “remains a crucial element of a Covered Entity’s cybersecurity program.” The Letter outlines the actions and advice Covered Entities should take while progressing through the lifecycle of a TPSP relationship: (1) Identification, Due Diligence, and Selection; (2) Contracting; (3) Ongoing Monitoring and Oversight; and (4) Termination. While the Letter expressly states it does not impose new requirements or obligations on Covered Entities, but rather, intended to clarify Part 500 (specifically 500.11) and recommend best practices, the prescriptive guidance may in practice be considered the operative benchmark for certain (or many) Covered Entities.

Identification, Due Diligence, and Selection

Due to the increased risks associated with TPSP relationships, Covered Entities may wish to exercise caution and diligence before entering into any arrangement with a TPSP. Accordingly, the Letter outlines a non-exhaustive list of considerations for conducting due diligence on TPSPs. A few of those considerations include:

  • The type and extent of the TPSPs access to information systems and nonpublic information (“NPI”);
  • The TPSPs reputation within the industry, including its cybersecurity history and financial stability;
  • The controls the TPSP has implemented for its own systems and data, particularly if the Covered Entity’s systems are not fully segregated;
  • Whether the TPSP undergoes external audits and independent assessments;
  • The TPSPs practices for selecting, monitoring, and contracting with downstream service providers; and
  • Whether the TPSP, its affiliates, or vendors operate in or from jurisdictions considered high-risk due to geopolitical, legal, socio-economic, operational, or regulatory factors.

In addition to the above considerations, the Letter emphasizes that Covered Entities should consider how to best obtain, review, and validate information provided by prospective TPSPs. Although standardized questionnaires may facilitate the process of gathering the required information from the TPSPs, Covered Entities must ensure that those questionnaires are interpreted by qualified personnel to allow for proper risk-informed decisions to be made. In other words, vendor due diligence questionnaires are not a “check-the-box” requirement; the completed questionnaires must be carefully evaluated by qualified personnel and actioned appropriately. Additionally, if there are limited vendor options, Covered Entities should make risk-informed decisions, document the relevant risks, and take steps to implement compensating controls.

Contracting

Covered Entities that utilize TPSPs are required to develop and implement written policies and procedures that address due diligence and contractual protections. In the Letter, NYDFS provides a few examples of “baseline contract[al] provisions” that Covered Entities should consider incorporating into agreements with a TPSP. Some of the provisions include:

  • Develop and implement policies and procedures addressing access controls;
  • Develop and implement policies and procedures addressing encryption in transit and at rest;
  • Provide immediate or timely notice to the Covered Entity upon occurrence of a Cybersecurity Event directly impacting the Covered Entity’s information;
  • Require TPSPs to disclose where data may be stored, processed, or accessed; and
  • Require TPSPs to disclose the use of subcontractors and allow the Covered Entity to reject the use of certain subcontractors, which essentially allows Covered Entities the ability to control the use of Fourth Parties, somewhat akin to GDPR.

In addition to the above provisions, the Letter reinforces similar guidance that NYDFS provided in 2024, which we previously covered, in regard to inserting provisions in TPSP agreements that relate to the acceptable use of Artificial Intelligence (“AI”) products.

NYDFS clarified that the list provided in the Letter is neither exhaustive nor appropriate in all situations, but Covered Entities should continue to seek “reasonable protections, such as breach notification clauses, data use, and assurances regarding access controls and data handling.” Further, Covered entities should develop medium- to- long-term strategies to reduce its overall dependency on TPSPs.

Ongoing Monitoring and Oversight

A Covered Entity that utilizes a TPSP must have policies in place addressing the periodic assessment of TPSPs based on the risk that each “TPSP presents and the continued adequacy of [the TPSPs] cybersecurity practices.” The assessments conducted by Covered Entities may include obtaining security attestations from the TPSPs (e.g., SOC2, ISO 27001) and requiring penetration testing summaries, policy updates, evidence of security awareness training and proof of compliance audits.

In addition to the periodic assessments, Covered Entities should request updates on a TPSPs vulnerability management, assess patching practices, and confirm remediation of previously identified deficiencies. Although it may be an extensive exercise for Covered Entities, the Letter indicates that Covered Entities should document material or unresolved risks identified and escalate the risks as appropriate.

Termination

When a Covered Entity terminates its relationship with a TPSP, there are actions the Covered Entity should take to mitigate any potential risks from arising. Some of the actions the Letter outlines are prescriptive, including:

  • Revoking identity federation tools, API integrations, and external storage access;
  • Requiring certification of destruction of NPI, secure return of data, or migration of data to another TPSP or internal environment;
  • Confirming that any remaining snapshots, backups, or cached datasets are deleted and access to any shared resources is revoked;
  • Giving special attention to residual or unmonitored access points that fall outside routine access provisioning systems; and
  • Engaging keys take holders, including IT, legal, compliance, procurement, and business units to identify strategies to mitigate potential risks when planning to terminate.

Notably, in addition to the above actions a Covered Entity should take during the termination period, Covered Entities should ensure the offboarding process is properly documented and all relevant audit logs are retained to support accountability and future verifications.

NYDFS made it clear that it will continue to “consider the absence of appropriate TPSP risk management practices by Covered Entities in its examinations, investigations, and enforcement actions.” As such, although the Letter does not formally impose any new requirements for Covered Entities, Covered Entities are strongly encouraged to review the Letter and implement the practices identified by NYDFS to strengthen their cybersecurity posture.


This post was originally published on Alston & Bird’s Privacy, Cyber & Data Strategy Blog on October 27, 2025.


Privacy, Cyber & Data Strategy / White Collar, Government & Internal Investigations Advisory | GENIUS Act Establishes Federal Regulatory Oversight of Global Stablecoin Industry

Executive Summary
8 Minute Read

Our Privacy, Cyber & Data Strategy and White Collar, Government & Internal Investigations Teams examine how the GENIUS Act’s framework for stablecoin issuers will impact the cryptocurrency sector.

  • The Act restricts the issuance of payment stablecoins within the United States to “permitted payment stablecoin issuers” (PPSIs)
  • PPSIs must maintain reserves of high-quality, liquid assets that fully back their outstanding stablecoins on at least a one-to-one basis
  • Regulatory oversight is divided between federal and state authorities, with joint oversight applying when state issuers exceed certain thresholds or opt into federal frameworks

___________________________________________________

On July 17, 2025, during “Crypto Week,” the U.S. House of Representatives passed the landmark Guiding and Establishing National Innovation for U.S. Stablecoins Act (GENIUS Act). Signed into law by President Donald Trump the next day, the GENIUS Act establishes a comprehensive federal framework for the issuance of payment stablecoins, regulation of stablecoin issuers, and both federal and state oversight for stablecoin authorization, audits, and other obligations. Domestic and foreign issuers in the more than $250 billion stablecoin market now have a clear path to securing and maintaining regulatory compliance in the United States.

Demonstrating rare cross-aisle cooperation and a shared interest in modernizing financial regulations to match emerging blockchain and artificial intelligence (AI) technologies, the Act garnered 308 affirmative votes in the House and 68 in the Senate, surpassing the upper chamber’s filibuster threshold. The GENIUS Act addresses Trump’s key campaign and policy promise to bring clarity and control to the digital asset market.

Key Provisions of the GENIUS Act

Effective date

The GENIUS Act takes effect on the earlier of (1) January 18, 2027 (18 months after the date the Act is enacted into law); or (2) 120 days after the primary federal regulators responsible for stablecoins issue their final regulations to implement the Act.

Authorized issuance of stablecoins only

The Act restricts the issuance of payment stablecoins within the United States to only those entities that qualify as “permitted payment stablecoin issuers” (PPSIs). PPSIs must be either U.S.-based issuers authorized under the Act or foreign issuers that are registered and operate under a regulatory framework deemed comparable to the Act by U.S. authorities and are subject to supervision by the Office of the Comptroller of the Currency (OCC).

A domestic PPSI must meet the requirements of one of three main categories: (1) subsidiary of an insured depository institution that has received approval to issue payment stablecoins under Section 5 of the Act; (2) federal qualified payment stablecoin issuers, which encompass nonbank entities (excluding state-qualified issuers) approved by the OCC, uninsured national banks chartered and approved by the OCC, or a foreign bank that does business outside the United States and has opened one or more federally licensed branches or offices in a U.S. state (“federal branch”), approved by the OCC; or (3) state-qualified payment stablecoin issuers, which are entities legally established under state law and approved by a state payment stablecoin regulator, provided they are not an uninsured national bank, federal branch, insured depository institution, or subsidiary of any such entities.

Requirements for issuing stablecoins

PPSIs must maintain reserves that fully back their outstanding stablecoins on at least a one-to-one basis. These reserves must consist of high-quality, liquid assets such as U.S. coins and currency or credit with a Federal Reserve Bank, demand deposits at insured depository institutions, short-term U.S. Treasury securities, and other monetary securities described in Section 4(a)(1) of the GENIUS Act. Any PPSI must publicly disclose its redemption policies and publish monthly reports detailing the composition, average maturity, and custody location of its reserves. A PPSI’s CEO and CFO must certify the accuracy of those monthly reports, and the Act makes knowingly false certifications punishable by up to 10 or 20 years’ imprisonment under 18 U.S.C. § 1350. To ensure reserve quality and transparency, PPSIs are prohibited from pledging, rehypothecating, or reusing reserves except under limited conditions, such as meeting margin obligations for investments in permitted reserves or creating liquidity to redeem payment stablecoins.

Mitigating money laundering and illicit financing risk

The GENIUS Act designates permitted payment stablecoin issuers as “financial institutions” under the Bank Secrecy Act (BSA), requiring them to implement robust compliance programs to prevent money laundering, terrorist financing, sanctions evasion, and other illicit activity. PPSIs must annually certify that they have implemented an effective BSA/AML compliance program. False certifications are punishable by up to five years’ imprisonment. To ensure regulatory parity, the Act’s registration and inspection requirements for foreign issuers effectively subjects them to similar compliance standards when accessing the U.S. market. Issuers must also be technologically capable of assisting with asset freezes, seizures, and turnovers pursuant to lawful orders. The Act further strengthens enforcement by requiring both U.S. and foreign issuers to (1) maintain the technical ability to comply with such orders; and (2) comply with them. Foreign issuers that fail to do so may be designated “noncompliant” by the Treasury, triggering a ban on secondary trading of their stablecoins after 30 days. Violations of that ban carry steep penalties—up to $100,000 per day for digital asset service providers and $1 million per day for foreign issuers.

Regulatory oversight

Regulatory oversight is divided between federal and state authorities, with federal regulators overseeing federally chartered or bank-affiliated issuers, state regulators supervising state-chartered issuers, and joint oversight applying when state issuers exceed certain thresholds or opt into federal frameworks. Regulators are responsible for licensing, examining, and supervising PPSIs to ensure compliance with the Act’s requirements, including reserve backing, redemption policies, and risk management standards.

PPSIs with more than $50 billion in consolidated total outstanding issuance that are not subject to the reporting requirements of the Securities Exchange Act of 1934 are required to prepare an annual financial statement in accordance with generally accepted accounting principles (GAAP) and must disclose any “related party transactions,” as defined under GAAP. A registered public accounting firm must audit the annual financial statement, and the audit must comply with all applicable standards set by the Public Company Accounting Oversight Board. These audited financial statements must also be made publicly available on the PPSI’s website and submitted annually to the PPSI’s primary federal payment stablecoin regulator.

Civil and criminal penalties

Additional civil and criminal penalties are set out throughout the Act. Notably, entities other than PPSIs that issue payment stablecoins in the United States without proper approval may face civil penalties of up to $100,000 per day for violations. Individuals who knowingly issue stablecoins in the United States without being a permitted payment stablecoin issuer face up to five years’ imprisonment and fines up to $1 million for each violation. Additionally, individuals with certain felony convictions are prohibited from serving as officers or directors of a PPSI, and violations of that prohibition can result in imprisonment for up to five years. The Act expressly gives regulators discretion to refer violations of the Act to the Attorney General.

Modernizing anti-money laundering and financial crimes compliance

The GENIUS Act places a strong emphasis on leveraging blockchain technology and AI to modernize the detection of illicit financial activity involving digital assets. The Act mandates that the Secretary of the Treasury initiate a public comment period to gather insights on how regulated financial institutions are using or could use innovative tools—particularly blockchain and AI—to detect money laundering and related crimes. Blockchain technology is highlighted for its potential in transaction monitoring and transparency, especially in tracking digital asset flows and identifying suspicious patterns.

Rulemaking timeline

The Act mandates that all primary federal payment stablecoin regulators, the Secretary of the Treasury, and state payment stablecoin regulators must promulgate regulations to implement the Act within one year of its enactment (July 18, 2026). These regulations must be issued through a notice-and-comment process. Additionally, within 180 days of the Act’s effective date, the OCC, Federal Deposit Insurance Corporation, and Board of Governors of the Federal Reserve System shall submit a report to the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Financial Services that confirms and describes the regulations necessary to carry out this Act.

Other Impending Crypto Legislation

The GENIUS Act is momentous for stablecoin issuers, but it does not resolve a number of crypto-native issues, which are the subject of a broader market structure bill known as the Digital Asset Market Clarity Act of 2025 (CLARITY Act). The CLARITY Act passed the House with broad bipartisan support, and a version is currently under Senate consideration. While the GENIUS Act focused narrowly on regulating stablecoin issuers, the CLARITY Act seeks to establish a robust regulatory framework for all digital assets and define the roles of the Securities and Exchange Commission and Commodity Futures Trading Commission in policing the digital asset markets. Most notably, for the first time, the CLARITY Act attempts to classify digital assets based on their characteristics, such as decentralization and blockchain maturity, with a goal of reducing regulatory uncertainty and fostering innovation in the cryptocurrency industry. Senator Tim Scott (R-SC), chair of the Senate Banking Committee, has made several public statements on the timeline for consideration of the CLARITY Act, with committee markup expected in September and full Senate action possible by late fall.

Conclusion

The GENIUS Act establishes a robust framework for the issuance and oversight of payment stablecoins in the United States. It sets clear standards to ensure transparency for the backing of permitted payment stablecoins, and it requires issuers, like traditional financial institutions, to quickly establish robust compliance programs to combat illicit uses of their stablecoins. With its strong bipartisan backing and goals of financial stability, consumer protection, and global competitiveness, the Act could lay the groundwork for a more transparent and trustworthy digital asset ecosystem.

Ransomware Fusion Center

Stay ahead of evolving ransomware threats with Alston & Bird’s Ransomware Fusion Center. Our Privacy, Cyber & Data Strategy Team offers comprehensive resources and expert guidance to help your organization prepare for and respond to ransomware incidents. Visit Alston & Bird’s Ransomware Fusion Center to learn more and access our tools.


Originally published July 24, 2025.

If you have any questions, or would like additional information, please contact one of the attorneys on our Privacy, Cyber & Data Strategy team.

You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form.

Wave Goodbye to the Waiver Debate: Court Holds Data Breach Investigation Report Not Work Product from the Start

Litigants in data breach class actions often fight over whether a data breach investigation report prepared in response to the breach is protected by the work-product doctrine. Common areas of dispute include whether the report was prepared in whole or in part for business—not legal—purposes, and whether the report relays facts that are not discernable from other sources. The fight becomes even more complicated, however, when the company that suffered the data breach is required to provide the report to regulators.

For example, in the mortgage industry, mortgagees regulated by the Multistate Mortgage Committee (MMC) are required to provide a “root cause report” following a data breach. Similarly, under Mortgagee Letter 2024-10, FHA-approved mortgagees must notify HUD of a cybersecurity incident and provide the cause of the incident. These reporting obligations involve production of information to regulators that typically overlaps with the content of data breach investigation reports.

Traditionally, one might think that disclosure of an investigation report (or its contents) to a regulator was a question of waiver. But recently, a federal district court in the Southern District of Florida bypassed the waiver analysis entirely by holding that reports provided to regulators weren’t protected by the work-product doctrine because they were primarily created for regulatory compliance rather than in anticipation litigation, even though, factually, they weren’t originally created for the purpose of regulatory compliance.

What Happened?

In a recent decision in a data breach litigation against a national mortgage loan servicer, the court considered whether investigative reports prepared by cybersecurity firms were protected under the work-product doctrine. These reports were initially withheld from discovery on the familiar grounds that they were prepared in anticipation of litigation following a data breach. But the plaintiffs argued that because the reports were disclosed to mortgage industry regulators, any work-product protections were waived.

Rather than address the waiver issue, the court analyzed whether the documents were privileged in the first place under the dual-purpose doctrine, which assesses whether a document was prepared in anticipation of litigation or for other business purposes. Under this doctrine (adopted by the First, Second, Third, Fourth, Sixth, Seventh, Eighth, Ninth, and D.C. Circuits), a document is protected if it was created “because of” the anticipated litigation, even if it also serves an ordinary business purpose. Notably, the court found that the reports were primarily created to comply with regulatory obligations, specifically those imposed by the MMC, even though they’d initially been prepared in anticipation of litigation. In the court’s view, the unredacted submission of the reports to the MMC, when demanded, evidenced that the predominant purpose for their creation was regulatory compliance.

The court ended with the suggestion that the defendants could have avoided this issue by creating a separate document for regulatory compliance, omitting sensitive findings related to litigation. Aside from this suggestion, there does not appear to be a legal framework under the which the disclosed reports would have been protected work product, at least in the court’s view.

Why Does it Matter?

The district court’s decision creates a new challenge for breach victims seeking to protect investigation reports from disclosure under the work-product doctrine. A key purpose of the doctrine is to allow parties to engage in pre-litigation investigations without the fear of disclosure. Data breach victims dealing with regulators have historically had to manage the risk that disclosing investigation reports (in whole or in part) to regulators could result in litigation over whether work-product protections were waived. But the decision appears to raise the stakes. The risk of disclosure is not limited to a waiver analysis, where parties can defend the disclosure based on the circumstances of the compelled disclosure and can rely on law requiring the narrow construction of privilege waivers. Now, parties must also consider whether using a report for a non-litigation purpose after the fact will lead to the conclusion that the report wasn’t prepared for litigation at all and therefore not privileged in the first place.

What Do I Need to Do?

Because this decision is by a federal district court, this is an area that should be monitored to determine whether a trend develops around the court’s rationale. And in the interim, the best option seems to be to follow the court’s suggestion: create separate documents for regulatory compliance and litigation purposes.

It is, of course, important to maintain a good relationship with regulators to try to circumvent these issues, but the two-report approach is a practical way to preempt the issue entirely. The reality is that many litigation-related items do not need to be submitted in a regulatory report. For example, an emerging issue in the cybersecurity space is whether following a data breach, the company that suffered the breach should bring claims against other related parties. Analyzing the merits of this type of litigation is plainly covered by the work-product doctrine but is not needed for regulatory reports. Thus, by following the two-report approach, sensitive findings related to that potential litigation can be omitted from the regulatory report, preserving the work-product protection for the litigation-related document. This approach could help companies navigate the complexities of dual-purpose documents and maintain the intended protections of the work-product doctrine.

California Attorney General Targets Location Data in New Investigative Sweep

This week California Attorney General Rob Bonta announced a new investigative sweep under the California Consumer Privacy Act (CCPA). We have anticipated this sweep for some time based on the focus and the direction of a number of inquiries, investigations, and enforcement proceedings initiated by Attorney General Bonta’s office over the past 12-24 months.

The Notices of Violation issued by the Attorney General’s office will give rise to meaningful risks for many of the receiving businesses. We anticipate the Attorney General’s team will focus on granular technical details of data collection via mobile apps including through the third-party SDKs[1] that are ubiquitous across digital mobile products. How these and other digital analytics tools collect and transfer data, including precise location data, is often not well understood even by the internal digital marketing, data analytics, and product development teams that deploy and use the tools. This blind spot has created a zone of risk for many businesses that would not consider themselves a part of the “location data industry” referenced in the Attorney General’s announcement.

The interactions with the Attorney General’s office in these investigations and in enforcement proceedings can also change focus when the Attorney General’s staff suspects compliance gaps in other sensitive areas, such as use of mobile apps by children or in connection with healthcare or other sensitive activities. Careful and detailed internal legal/technical data flow analyses are therefore critical to quickly identifying the full scope of potential risk and framing the strategy for engaging with the Attorney General. For those businesses that have not received notices, this is another opportunity to close the gap between digital advertising, data analytics, and mobile app development and these emerging and increasingly clear legal privacy standards relating to precise location data and use of third-party SDKs in mobile apps.

Alston & Bird’s Privacy, Cyber & Data Strategy Team has extensive experience advising and defending clients who receive inquiries and violation notices from California’s privacy regulators.  We will continue to monitor developments in privacy regulatory enforcement in California and other states.

[1] “SDK” refers to a software development kit. These tools, many of which are free, are commonly used by mobile app teams to shorten app development timelines and quickly add features and functions to mobile apps.

_______________________________
Originally published March 12, 2025 on Alston & Bird’s Privacy, Cyber & Data Strategy Blog.