Google Flags AI Malware Surge As Hackers Use LLMs To Mutate Code On-The-Fly

Code arranged in the shape of a skull.
The industry-wide effort to AI all the things isn't without its seedy side. Namely, we're quickly entering an era of more sophisticated malware strains that evade common antivirus protections, with threat actors taking advantage of powerful large language models (LLMs), Google Threat Intelligence Group (GTIG) warns in a new security report.

GTIG says it's seen novel AI-enabled malware in active operations, marking a new phase of AI abuse. What makes the new crop of malware so alarming is its ability to alter its behavior on the fly, with the assistance of LLMs.

"For the first time, GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, that use LLMs during execution. These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware," GTIG says.

These kinds of threats are new to the landscape and therefore not as common as traditional malware, but according to GTIG, they represent a giant leap towards autonomous malware with the ability to dynamically adapt.

AI-driven malware is particularly worrisome because it can change its code and capabilities mid-execution to alter its behavior. This "just-in-time" self-modification can make AI-driven malware tricky to detect, because static antivirus signatures are built to match code that stays the same. To use our own imperfect analogy, think of it like a person caught on camera committing a crime who then changes clothes and cuts their hair, making them all the tougher to catch.
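To illustrate why static signatures struggle against self-modifying code, here's a minimal toy sketch (our own, not from the GTIG report, and the "scripts" are harmless placeholder strings): a hash-based signature matches one exact byte sequence, so a functionally identical variant with even a trivial change slips past it.

```python
import hashlib

# Two harmless stand-in "scripts" that do exactly the same thing;
# the only difference is a variable name, so the bytes differ.
variant_a = b"x = 41\nprint(x + 1)\n"
variant_b = b"y = 41\nprint(y + 1)\n"

# A static "signature": the hash of the known sample.
signature = hashlib.sha256(variant_a).hexdigest()

def matches_signature(sample: bytes) -> bool:
    # Flags a sample only if its hash exactly matches the known signature.
    return hashlib.sha256(sample).hexdigest() == signature

print(matches_signature(variant_a))  # True: the known sample is flagged
print(matches_signature(variant_b))  # False: the trivially mutated variant evades the hash
```

Real antivirus signatures are more sophisticated than a raw hash, but the underlying point holds: a detector keyed to fixed patterns has nothing stable to key on when the code rewrites itself at runtime.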

Related to this emerging threat, GTIG points to a maturing underground marketplace for illicit AI tools.

"We have identified multiple offerings of multifunctional tools designed to support phishing, malware development, and vulnerability research, lowering the barrier to entry for less sophisticated actors," GTIG says.

The lengthy report highlights several examples of AI-powered malware, as well as how Google is detecting and disrupting adversary operations. It's a bit self-serving in that context, but no less alarming.