It’s hard to believe, but a whole year has passed since I first wrote about AI becoming a potent threat in the hands of future cybercriminals. Well, it’s safe to say that future has arrived, albeit faster than any of us expected.
Because on September 24, 2024, HP Wolf Security reported on one of the first “in the wild” occurrences of an AI-generated malware script.
The discovery of AI-generated malware
Before we dig into that, let’s roll the clock back a bit to June 2024, when HP Sure Click first discovered a seemingly harmless French email attachment posing as an invoice: a basic HTML file that, when opened, prompted the user for a password.
Sounds like the start of a phishing scam, right? Well, this wasn’t your run-of-the-mill lure. This was something more sophisticated.
Rather than being encrypted inside an archive, as payloads commonly are, this one was encrypted directly within the file’s JavaScript, using entirely error-free AES encryption (itself difficult to get right, and a sign of higher technical proficiency). That’s what prompted HP’s team to dig deeper, and ultimately to bring the threat to the surface in their September 2024 report.
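HP’s report doesn’t reproduce the sample’s exact scheme, but the general pattern is easy to illustrate: a payload is embedded, encrypted, in the delivered file itself, and a password supplied by the victim derives the key that unlocks it. The sketch below is a dependency-free Python stand-in for that pattern, using stdlib PBKDF2 for key derivation and a simple XOR keystream in place of AES (Python’s standard library has no AES); the salt, password, and payload are all hypothetical.

```python
import hashlib
import itertools

def derive_key(password: str, salt: bytes, length: int = 32) -> bytes:
    # PBKDF2-HMAC-SHA256: stretch a human-chosen password into key material
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000, dklen=length)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric XOR keystream; a toy stand-in for the sample's AES
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# "Attacker" side: embed an encrypted payload in the delivered file
salt = b"demo-salt"
payload = b"function run() { /* hypothetical script body */ }"
blob = xor_cipher(payload, derive_key("s3cret", salt))

# "Victim" side: the correct password recovers the payload in memory,
# so nothing suspicious ever sits on disk in plaintext
recovered = xor_cipher(blob, derive_key("s3cret", salt))
```

The defensive takeaway is the same regardless of cipher: because the payload only exists in plaintext after the password is entered, static scanners that never supply the password see only opaque bytes.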
No ordinary malware infection chain
After brute-forcing the password, the HP team uncovered a rather interesting infection chain, with a VBScript file deploying AsyncRAT, a remote access trojan that gives an attacker full control over a victim’s computer.
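How does an analyst brute-force a password like this? Typically by running a wordlist against the blob and checking each candidate plaintext for a plausibility marker (readable script keywords, magic bytes, and so on). The sketch below reuses the same toy scheme as above (PBKDF2 key derivation plus an XOR keystream standing in for AES) purely to show the shape of the loop; the wordlist and marker are hypothetical.

```python
import hashlib
import itertools

def derive_key(password: str, salt: bytes, length: int = 32) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000, dklen=length)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def crack(blob: bytes, salt: bytes, wordlist, marker: bytes = b"function"):
    # Try each candidate password; accept the first decryption that
    # yields a plaintext containing the expected marker
    for pw in wordlist:
        candidate = xor_cipher(blob, derive_key(pw, salt))
        if marker in candidate:
            return pw, candidate
    return None, None

# Demo: encrypt a fake script, then recover the password from a wordlist
salt = b"demo-salt"
blob = xor_cipher(b"function run() {}", derive_key("invoice2024", salt))
found_pw, plaintext = crack(blob, salt, ["123456", "password", "invoice2024"])
```

Real-world cracking swaps the wordlist for large dictionaries or GPU-accelerated guessing, but the verify-each-candidate loop is the same.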
What’s even more perplexing is that the adversary’s code was filled with extensive comments, explaining what each line was meant to do. That’s not a common sighting in malware scripts, as hackers go to painstaking lengths to hide their intentions, making it more difficult to thwart their efforts.
So, was this just some cocky new player showing off? Or was there something more going on in those few lines of code?
GenAI-written payloads: the prophecy is true
HP surmised, based on the code’s overall structure, the plainly visible comments, and the choice of function and variable names, that this attacker leveraged generative AI to develop the scripts.
Just like we predicted in our blog post one year ago about threat actors capitalizing on AI.
In case you haven’t read it, let’s quickly recap a few of the predictions as they relate to HP’s find.
- Accelerating malware development with AI? Check.
- AI lowering the barrier for less skilled attackers? You bet.
- Improved social engineering? While the email itself isn’t highlighted as AI-generated, you can bet more advanced social engineering attacks are coming (if not already circulating).
What this all means for enterprise security
AI-generated malware is no longer a distant future problem, and these dangers will surely become more sophisticated—and more prevalent—as time goes on.
After all, if 83% of organizations are already using AI to generate code for legitimate purposes, you can bet the bad guys are doing the same to further their own nefarious ends.
That’s why, as we head into Cybersecurity Awareness Month, it’s important to keep your team not just brushed up on the basics, but to extend that awareness to education around generative AI. It’s also crucial for your enterprise to maintain secure code signing operations.
That continuous learning and constant vigilance, coupled with the ability to prevent malicious code execution, can help ensure success in a future where AI-generated malware only becomes more prevalent.
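The core idea behind preventing malicious code execution is simple: only run code that carries a valid signature from a trusted key. The sketch below is a minimal, hypothetical illustration of that principle using a stdlib HMAC; it is not Venafi’s product or any real code-signing format, and production deployments use asymmetric keys held in an HSM rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical org-wide signing key, for illustration only; real code
# signing uses asymmetric keys so the verifier never holds a signing secret
SIGNING_KEY = b"demo-signing-key"

def sign(script: bytes) -> str:
    # Signing step: performed in a controlled build/release pipeline
    return hmac.new(SIGNING_KEY, script, hashlib.sha256).hexdigest()

def is_trusted(script: bytes, signature: str) -> bool:
    # Verification step: the endpoint refuses anything whose signature
    # fails to match, blocking tampered or unauthorized code
    return hmac.compare_digest(sign(script), signature)

# A signed script verifies; any tampering breaks the signature
script = b"print('routine maintenance task')"
signature = sign(script)
```

An AI-generated payload, however convincingly written, would fail this check simply because it was never signed by the trusted pipeline.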
HP’s discovery of AI-generated malware is a wake-up call
AI-powered threats aren’t just a concept anymore. For better or worse, they’re here, now.
The question is: Are you ready for them?
With Venafi, you can be. Learn more about our comprehensive Stop Unauthorized Code solution.