Originally published January 6, 2026 on the Alston & Bird website.
Executive Summary
Our Privacy, Cyber & Data Strategy Team examines the profound implications of the evolution of AI-driven cyberattacks and offers practical steps general counsel can take to proactively defend against them.
- Vibe hacking—AI’s ability to work autonomously—eliminates threat actors’ need for technical expertise or large teams to conduct complex attacks
- Game-changing polymorphic malware and “just-in-time” AI-driven code regeneration can defeat traditional defenses
- General counsel can update their companies’ own AI systems and ensure third-party AI vendors have the latest security
The cyber-threat landscape has always evolved rapidly, but the emergence and weaponization of artificial intelligence (AI)—particularly generative AI (GenAI)—by threat actors represents a seismic shift that cannot be ignored. Just one year ago, we wrote about the early stages of adversaries using AI to automate and customize cyberattacks, primarily to improve phishing campaigns, develop deepfakes, and refine their tactics, techniques, and procedures. Over the past year, threat actors have significantly escalated their use of AI, moving from “vibe hacking”—the use of agentic AI systems that can reason, plan, and act autonomously as both technical consultants and active operators of cyberattacks—to unprecedented integration and autonomy of AI throughout the attack life cycle.
If that were not enough, we are now starting to see polymorphic malware and “just-in-time” AI-driven code regeneration, which are true game changers. Polymorphic malware powered by AI can continuously rewrite its own code in real time, defeating traditional signature-based detection and even heuristic analysis. Compounding these risks are cyberattacks targeting AI systems themselves, such as prompt injection attacks that manipulate the reasoning layer of AI models and often leave no meaningful forensic trail. This creates significant limitations for conventional forensic tools and logging frameworks, which were never designed to capture the internal logic of autonomous AI agents.
The Evolution of AI in Cyberattacks: From Phishing Emails and Deepfakes to Fully Automated AI-Powered Cyberattacks
Initially, threat actors used AI to enhance phishing campaigns—improving grammar, tone, and personalization to increase success rates. Attackers soon leveraged AI to create convincing deepfakes for social engineering and fraud. By mid-2025, vibe hacking emerged as hackers began to use and code agentic AI systems not just to be assistants in an attack but as autonomous operators capable of executing complex, multistep cyberattacks. Today, organizations face fully automated AI-powered attacks with minimal human oversight.
This shift is driven by several factors:
- An evolution of AI to be able to execute nearly every stage of an attack.
- The maturing underground marketplace for illicit AI tools.
- AI’s ability to complete cyberattacks faster than before.
- Elimination of the need for hackers with deep technical expertise or large teams with specific expertise to conduct complex cyberattacks.
Vibe hacking is threat actors’ use of agentic AI systems as technical consultants and active operators of cyberattacks. In practice, AI agents like Claude Code are no longer just assisting attackers; they are executing multistep operations independently, from reconnaissance and credential harvesting to data exfiltration and ransom note generation. This evolution lowers the barrier to entry for sophisticated cybercrime and enables lone actors to conduct campaigns that previously required coordinated teams.
Cybercriminals took this to the next level in the last quarter of 2025. Highly sophisticated threat actors, including a Chinese state-sponsored group (designated by Anthropic as GTG-1002), demonstrated “unprecedented integration and autonomy of AI throughout the attack lifecycle.” GTG-1002 coordinated targeted attacks against approximately 30 entities, with several confirmed compromises.
Claude Code was manipulated to autonomously execute 80%–90% of an attack—reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration. Humans primarily assumed a strategic oversight role, initiating the campaign and intervening at critical decision points. Anthropic indicated this marked the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection.
While these attacks were highly sophisticated, limitations remain. In the Anthropic-reported attacks, AI tools occasionally hallucinated data, misidentified credentials, and flagged publicly available information as sensitive. These errors, however, can be corrected with minimal human oversight, rapidly shrinking the limitations to fully autonomous attacks.
Accelerated speed of AI-powered cyberattacks
AI has dramatically shortened the time required to execute a cyberattack from start to finish. In fact, a defining characteristic of AI-powered attacks is the ability to gather and analyze data efficiently. AI tools not only speed up the research phase of a cyberattack but also improve the accuracy and completeness of a threat actor’s analysis of data.
Anthropic’s November 2025 report illustrates how AI compresses every stage of the attack life cycle:
- Reconnaissance. AI agents can automate reconnaissance tasks, scanning thousands of endpoints, cataloging systems, mapping infrastructure, identifying exposed services, and analyzing authentication mechanisms quickly. Tasks that once required weeks of manual effort can now be completed in hours.
- Vulnerability Discovery. AI systems can autonomously test for misconfigurations, weak credentials, and exploitable flaws using adaptive scripts that evolve based on real-time feedback.
- Exploitation and Lateral Movement. Agentic AI can sequentially leverage multiple vulnerabilities without human intervention, moving across networks and escalating privileges with minimal oversight.
- Credential Harvesting and Data Analysis. AI accelerates credential cracking and can parse massive datasets for sensitive information at machine speed, enabling attackers to identify and prioritize high-value assets instantly.
- Exfiltration and Monetization. AI orchestrates stealthy data exfiltration while simultaneously generating ransom notes or negotiating scripts tailored to victims.
This level of automation is not theoretical—it has already been observed in real-world campaigns. The ability to compress what was once a multiweek operation into a matter of hours or days is a paradigm shift in cyber-risk.
Lowering the barrier to entry
AI has erased the need for hackers to have extensive technical expertise. Threat actors who previously lacked the skills to execute complex attacks can now simulate professional competence through AI assistance. For example, North Korea’s IT remote worker scheme has been transformed by AI, enabling operatives to pass interviews, maintain engineering roles, and deliver work product without formal training. Beyond nation-state actors, AI empowers individual cybercriminals to accomplish what once required entire teams. Developing ransomware, identifying targets, and making strategic and tactical decisions around exploitation and monetization of stolen data can now be performed with the assistance of AI and a single individual overseeing the operation.
AI-Powered Polymorphic Malware and Code Regeneration
In its November 2025 AI Threat Tracker report, Google’s Threat Intelligence Group (GTIG) highlighted a shift from threat actors leveraging AI just for productivity gain to threat actors deploying novel AI-enabled malware in active operations. This can be seen in the growing use of just-in-time (JIT) code regeneration, allowing attackers to dynamically rewrite malicious code during execution, making detection and static analysis extremely difficult. GTIG emphasized that this capability enables malware to adapt to defensive measures in real time, reducing the effectiveness of traditional signature-based detection.
While GTIG focused on JIT regeneration, other sources, including Anthropic and independent threat researchers, have documented the rise of AI-powered polymorphic malware—malicious code that continuously mutates its structure to evade detection. Emerging strains such as PROMPTFLUX demonstrate how attackers use large language models (LLMs) to adjust code in malware so it can evolve dynamically and remain stealthy almost indefinitely.
The implications are profound: AI-driven polymorphism and JIT regeneration reduce the cost and complexity of maintaining stealth, making advanced malware accessible to less-skilled actors. Combined with AI’s ability to orchestrate reconnaissance, exploitation, and exfiltration, this trend alters the landscape for enterprise security.
Cyberattacks on AI Systems and Investigation Gaps
Cyberattacks against AI systems are introducing new challenges for forensic investigations and for understanding what happened and why. Even with the right security such as endpoint detection and response software and robust traditional monitoring and logging in place, there may be significant gaps when investigating AI-driven attacks.
One of the most concerning examples is a prompt injection attack, which targets AI systems by embedding malicious instructions inside what appears to be normal input. For instance, an attacker might hide a command such as “delete all logs” within a seemingly benign request. Because LLMs cannot reliably distinguish between trusted commands and untrusted data, the AI may execute these instructions without question. This creates a fundamental problem both for securing the AI system itself and for forensically investigating these attacks.
Conventional investigations rely heavily on system-level logs—detailed records that explain what happened within an operating system or application—that allow investigators to reconstruct the timeline of an attack, identify actions taken, and determine which systems were compromised. However, prompt injection attacks occur inside the AI’s reasoning layer, not at the operating system level. If the AI is instructed to erase or alter logs, investigators may only see the outcome (e.g., data exfiltration or deletion) without any record of the causal chain or how the outcome occurred.
In other words, the “why” behind the action is missing because the attack exploits the AI’s cognitive process rather than the system’s technical process. Forensic teams cannot rely on the old playbook—they likely need new methods such as monitoring AI interactions, auditing prompts, and implementing specialized AI audit trails to reconstruct the reasoning chain.
Practical Tips and Actionable Steps
As AI-driven cyber-threats accelerate, general counsel (GCs) play a critical role in shaping governance, risk management, and legal response strategies.
- Update Incident Response Procedures for AI-Powered Cyberattacks. AI-driven attacks occur at unprecedented speed and often outpace traditional response timelines. Companies should consider updating incident response procedures to explicitly address these threats. Organizations should also consider incorporating scenarios involving AI-powered attacks, such as polymorphic malware and prompt injection, into tabletop exercises to test readiness and identify gaps. These exercises provide valuable insights into how an AI-driven incident might unfold and help better prepare the organization to respond to the unique nature of AI-driven attacks.
- Investigate AI-Powered Cyberattacks—Protect Privilege and Ensure Vendor Expertise. AI-powered attacks introduce new types of evidence—such as prompt logs, model outputs, and reasoning steps—that traditional forensic processes do not capture. GCs should structure investigations to preserve attorney-client privilege for AI-related forensic evidence. Given the new types of forensic evidence, companies may also want to verify that their preferred third-party forensic firms maintain the necessary expertise.
- Audit AI Inputs. Where feasible and appropriate, organizations should consider regularly auditing AI inputs to fine-tune the AI system to detect and block malicious or misleading prompts before they trigger harmful actions. Auditing reduces the risk of unauthorized activity and provides transparency into how AI systems interpret and act on sensitive information. Reviewing prompts and outputs enables companies to identify misuse patterns; it also strengthens controls and ensures accountability in AI-driven decision-making.
- Revisit Vendor Management and Contracts. AI introduces risks that traditional vendor agreements do not fully address. Companies should consider contractual provisions that require vendors to monitor AI systems for misuse, such as prompt injection attacks, and maintain detailed audit trails of prompts and outputs for forensic investigations. Companies should consider including provisions that mandate compliance with emerging AI regulations in vendor agreements to ensure vendors meet evolving legal and security standards.
- Review Governance and Oversight. GCs can be an advocate for an AI risk governance framework that aligns with regulatory expectations and industry standards. Board-level reporting should include AI-specific threat trends and mitigation strategies. This helps with leadership visibility and accountability for AI-related risks.
- Monitor Regulatory and Liability Developments. Stay ahead of emerging AI regulations and assess potential liability exposure for AI misuse or compromised AI systems. This will allow GCs to advise leadership on compliance obligations and risk mitigation strategies.
Ransomware Fusion Center
Stay ahead of evolving ransomware threats with Alston & Bird’s Ransomware Fusion Center. Our Privacy, Cyber & Data Strategy Team offers comprehensive resources and expert guidance to help your organization prepare for and respond to ransomware incidents. Visit Alston & Bird’s Ransomware Fusion Center to learn more and access our tools.
If you have any questions, or would like additional information, please contact one of the attorneys on our Privacy, Cyber & Data Strategy team.
You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form.