Self-modifying code that learns from every attack is no longer science fiction. It’s active in the wild—and traditional defenses are failing.
When Marcus Hutchins, the malware analyst who famously stopped the WannaCry ransomware attack, calls something “a really fun novelty research project,” you might expect it to be harmless.
But the reality of AI-powered polymorphic malware—code that continuously rewrites itself to evade detection—has moved far beyond the laboratory.
Recent research indicates that AI-generated polymorphic malware can create a new, unique version of itself as frequently as every 15 seconds during an attack, a capability that fundamentally breaks the traditional cat-and-mouse game between attackers and defenders.
For the first time in cybersecurity history, the mouse is evolving faster than the cat can run.
The Silent Epidemic
The numbers paint a sobering picture.
According to recent industry analysis, 76% of detected malware now exhibits AI-driven polymorphism, representing what security researchers describe as a quantum leap from earlier variants that used simple obfuscation techniques.
Each infection is genetically unique, rendering signature-based detection—the backbone of traditional antivirus software—almost useless.
This isn’t theoretical. In June 2025, CardinalOps researchers successfully recreated a proof-of-concept called BlackMamba, a keylogger that uses OpenAI to dynamically generate its core payload at runtime.
The malware never writes its malicious code to disk. Instead, it queries an AI model through a legitimate API, generates fresh code in memory, and executes it using Python’s exec() function—all while appearing to antivirus software as a benign program making ordinary network requests.
“Each time the file was executed, its hash was different due to the AI’s variation in function names, structure, and base64-encoded payloads,” the researchers noted. Traditional signature-based detection methods failed completely.
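The hash-variance effect the researchers describe can be reproduced with an entirely benign sketch. The snippet below stands in for the LLM call with a local generator that varies a function name and base64-encodes a harmless payload: the behavior of every variant is identical, but no two runs share a file hash, which is exactly what defeats signature matching. All names here (`generate_variant`, `run_variant`) are illustrative, not from BlackMamba itself.

```python
import base64
import hashlib
import random
import string

def generate_variant() -> str:
    """Build a trivially varied, harmless code string in memory.

    Stands in for the malware's LLM query: each generation uses a fresh
    function name and a base64-encoded payload, so no two variants hash
    the same. The payload here is a greeting, not malicious logic.
    """
    fname = "f_" + "".join(random.choices(string.ascii_lowercase, k=8))
    payload = base64.b64encode(b"hello from memory").decode()
    return (
        "import base64\n"
        f"def {fname}():\n"
        f"    return base64.b64decode('{payload}').decode()\n"
        f"result = {fname}()\n"
    )

def run_variant(src: str) -> str:
    """Execute generated source with exec(); nothing is written to disk."""
    ns: dict = {}
    exec(src, ns)
    return ns["result"]

a, b = generate_variant(), generate_variant()
# Identical behavior across variants...
assert run_variant(a) == run_variant(b) == "hello from memory"
# ...but different source text, hence different hashes for a scanner to match.
print(hashlib.sha256(a.encode()).hexdigest() != hashlib.sha256(b.encode()).hexdigest())
```

The point of the demo is narrow: a hash or byte-pattern signature keys on the artifact, and runtime generation ensures there is no stable artifact to key on.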
Industrial-Scale Evolution
What makes this threat particularly insidious is its accessibility. Steve Stone, Senior Vice President of threat discovery and response at SentinelOne, has tracked the evolution firsthand.
“LLM-enabled malware has already moved from proof-of-concept to practice,” Stone explains, pointing to discoveries like MalTerminal—the earliest known GPT-4-powered malware capable of generating ransomware or reverse-shell code at runtime.
The tools, he notes, blur the line between code and conversation, allowing malicious logic to be generated dynamically and evade traditional signatures.
The impact on response times has been devastating. AI-powered ransomware has cut median dwell time from 9 days to 5 days, giving security teams barely a business week to detect and contain threats that previous generations of malware would have taken weeks to execute.
The Double-Edged Sword of AI
Perhaps most alarming is how thoroughly the economics have shifted in favor of attackers.
In 2025, 93% of ransomware victims who paid the ransom had their data stolen anyway, and over 83% of organisations that paid were successfully attacked a second time.
The AI doesn’t just remember—it learns from every encounter, optimising its approach for the next victim.
The technique exploits a fundamental vulnerability in how we’ve built cybersecurity defenses. For decades, the industry has relied on pattern recognition: collect samples of known malware, extract their signatures, and block anything that matches.
But when the malware can regenerate itself faster than analysts can document it, the entire paradigm collapses.
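The paradigm described above can be reduced to a few lines. This is a deliberately simplified model—real antivirus signatures are richer than whole-file hashes, and the sample strings are invented—but the matching principle, and its brittleness, are the same: change a single byte and the catalogue no longer applies.

```python
import hashlib

# Toy signature database: hashes of previously catalogued samples.
# (Illustrative stand-in for a real AV signature set.)
known_bad = {hashlib.sha256(b"known malicious sample").hexdigest()}

def signature_scan(blob: bytes) -> bool:
    """Block anything whose hash matches a previously analysed sample."""
    return hashlib.sha256(blob).hexdigest() in known_bad

# An exact copy of a catalogued sample is caught...
assert signature_scan(b"known malicious sample")
# ...but a variant differing by one byte sails through.
assert not signature_scan(b"known malicious sample!")
```

When a new variant can be produced every few seconds, defenders are adding entries to `known_bad` slower than the attacker can invalidate them.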
Security research indicates that AI-powered detection tools can identify novel malware patterns with up to 300% greater accuracy than signature-based systems.
The problem is deployment: while defenders scramble to integrate these new tools, attackers are already weaponising the same technology at scale.
The Reality Check
Yet amid the alarm, some experts urge caution about overstating the threat. Hutchins points out that non-AI polymorphic techniques are predictable, cheap, and reliable, whereas AI-based approaches require local models or third-party API access and can introduce operational risk.
He notes that commodity packers and crypters already generate huge variant volumes cheaply and predictably, questioning whether sophisticated threat actors see value in switching to probabilistic AI generation that might break functionality.
Even if binaries differ, malware must still perform malicious actions like command-and-control communication, privilege escalation, and credential theft, which produce telemetry independent of code structure, Hutchins explains.
Modern endpoint detection and response platforms detect this behavior reliably, regardless of how the code is structured.
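Hutchins’s argument can be illustrated with a toy behavioral rule. Instead of matching bytes, the detector below matches an ordered sequence of actions in a process’s event stream; the event names and the `behavioral_alert` helper are hypothetical, not any vendor’s API, but they show why two structurally different variants that do the same thing trip the same alarm.

```python
# A suspicious chain of actions, independent of how the code producing
# them is structured. (Event names are illustrative.)
SUSPICIOUS_SEQUENCE = ("read_credentials", "connect_c2", "exfiltrate")

def behavioral_alert(events) -> bool:
    """Flag if the suspicious actions appear, in order, in the event stream."""
    it = iter(events)
    return all(any(e == step for e in it) for step in SUSPICIOUS_SEQUENCE)

# Two "variants" with different code structure but identical behavior:
variant_a = ["open_file", "read_credentials", "connect_c2", "exfiltrate"]
variant_b = ["sleep", "read_credentials", "sleep", "connect_c2", "exfiltrate"]
benign    = ["open_file", "render_ui", "connect_update_server"]

assert behavioral_alert(variant_a) and behavioral_alert(variant_b)
assert not behavioral_alert(benign)
```

However polymorphic the binary, the telemetry it must emit to accomplish its goal stays recognisable—which is the bet behavioral EDR makes.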
The debate isn’t whether AI-powered polymorphic malware exists—it demonstrably does. The question is whether it represents an evolutionary leap or incremental improvement over existing threats.
What Comes Next
The Flashpoint Analyst Team offers a sobering perspective: “AI-generated malware will get headlines, but threat actors don’t need fully autonomous malware when infostealers already automate the hardest part: initial compromise at scale.”
They note that 1.8 billion credentials were stolen by infostealers in the first half of 2025 alone.
For chief information security officers, the message is clear: the compression of time and effort for attackers requires a fundamental shift in defensive strategy.
The advantage shifts to those who can automate, iterate faster, and maintain visibility and control, industry analysts warn.
Preparing for this reality means doubling down on behavioral monitoring, identity security, and response automation.
The arms race between adaptive malware and adaptive defenses has entered a new phase. Unlike previous generations of threats, this one doesn’t just evolve between versions—it evolves during execution, learning from its environment and adjusting tactics in real time.
The question isn’t whether your organisation will face self-modifying AI malware.
With 81% of organisations worldwide encountering at least one malware-related security incident in the past 12 months, the question is whether your defenses can adapt as quickly as the threat itself.

