It seems everywhere you look, there’s another update about generative artificial intelligence. No surprise there. And the party’s only just started.
After all, the list of possibilities for using AI to improve personal and business life is already immense, and many people are taking advantage of automating various processes, developing content and even writing code. (Or writing Celtic poems about TLS certificates… okay, fine. Guilty as charged.)
However, as with anything in the technology world, threat actors are already putting AI to use for their own immoral gains. In fact, according to a recent Deep Instinct study conducted by Sapio Research, 75% of security professionals reported an increase in cyberattacks over the past year, and 85% of those respondents attributed the rise to “bad actors using generative AI.”
How are threat actors leveraging artificial intelligence?
Hackers and threat actors already have several tactics in their arsenal for carrying out cyberattacks, and AI can help them improve their sinister operations in six key ways:
- AI-assisted reconnaissance
- Hyper-targeted spear-phishing attacks
- Using generative AI to write malicious code
- Reverse engineering existing programs
- Injecting incorrect or harmful content into otherwise legitimate models
- Bypassing CAPTCHA tools, reverse psychology and jailbreaking
A quick caveat before we dive into them:
AI models are evolving rapidly. As these tools mature, defenses will improve, but so will adversarial tactics, techniques and procedures (TTPs). Be sure to check back often for the latest on AI, cybersecurity, and machine identity security.
AI-assisted reconnaissance
Much of the process of researching potential cyberattack targets currently involves manual tactics, but AI changes that, automating the process in much the same way you might automate parsing large spreadsheets of data or summarizing dense research reports into easy-to-digest bullets.
With AI tools that are connected to the web, the problem deepens, because attackers get up-to-date information on demand. Hackers can easily research companies, employee directories, investor information and executive social media profiles, to name just a few sources. Pretty much anything on the public web is fair game.
AI can help compile all that information in moments, rather than a threat actor having to purchase it or scrape it together all on their own, giving them more time and energy to develop more potent malware or carry out further attacks.
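For a sense of how little effort that automation takes, here is a minimal sketch of the summarization step, written against the OpenAI Python SDK. The model name, prompt and file name are illustrative assumptions, and the same pattern is useful defensively: run it over your own public pages to see what an adversary's tooling would surface about you.

```python
# A sketch of the summarization step, assuming the OpenAI Python SDK (v1.x)
# and an illustrative model name. Defenders can run the same loop over their
# own public-facing text to preview what an attacker's tooling would surface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_public_footprint(scraped_text: str) -> str:
    """Condense publicly available text about an organization into short bullets."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the following public information as concise "
                    "bullet points: key people, technologies mentioned, and "
                    "published contact details."
                ),
            },
            {"role": "user", "content": scraped_text},
        ],
    )
    return response.choices[0].message.content

# Example: point it at text copied from your own public pages.
# print(summarize_public_footprint(open("our_public_pages.txt").read()))
```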
One such recon tool, the Social Network Automated Phishing with Reconnaissance tool, or SNAP_R for short, was presented at Black Hat USA 2016. It analyzes a target's past Twitter (now X) activity in order to generate truly tailored tweets carrying shortened malicious links, ones that users are much more likely to click. Doing that for just one target is alarming enough, but imagine how easy it now is to write tens of thousands of malicious tweets in mere minutes.
Hyper-targeted spear-phishing attacks
Conventional phishing attacks aren't the most efficient tactic for a threat actor, but they may still hook an unsuspecting victim or two. AI, however, gives hackers the means to move beyond the traditional spray-and-pray approach.
With AI, threat actors can quickly generate personal, tailored email text, and, given generative AI’s conversational abilities, hackers can make it difficult for individuals to discern legitimate emails from malicious ones.
In addition, many hackers are using AI to take their spear phishing attacks even further.
Deepfakes
Deepfakes allow anyone to take on the voice and visage of another person, and today's AI technology needs only a few seconds of source footage to create a convincing likeness. A hacker could pose as an executive across video, email and phone calls, for example, and convince someone in the accounting department to cut a check or wire money. That's why multi-step, out-of-band verification of unusual requests is crucial, as sketched below.
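As a rough illustration of that kind of verification, the sketch below issues a one-time code over a channel taken from an internal directory, never from the request itself. The directory, the send_sms stub and the approval store are hypothetical placeholders, not any specific product's API.

```python
# A minimal sketch of out-of-band verification for high-risk requests.
# TRUSTED_CALLBACK_NUMBERS, send_sms and the in-memory code store are
# hypothetical stand-ins for an internal directory and messaging service.
import hmac
import secrets

# Contact details come from an internal directory, never from the request.
TRUSTED_CALLBACK_NUMBERS = {"cfo@example.com": "+1-555-0100"}

pending_codes: dict[str, str] = {}

def send_sms(number: str, message: str) -> None:
    """Stand-in for a real messaging integration."""
    print(f"[sms to {number}] {message}")

def start_verification(requester: str) -> None:
    """Send a one-time code over a channel the requester did not choose."""
    code = secrets.token_hex(3)
    pending_codes[requester] = code
    send_sms(TRUSTED_CALLBACK_NUMBERS[requester],
             f"Confirm the pending payment request with code {code}")

def approve_payment(requester: str, supplied_code: str) -> bool:
    """Release funds only if the out-of-band code matches."""
    expected = pending_codes.pop(requester, "")
    return hmac.compare_digest(expected, supplied_code)
```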
Fabricated photos and footage
Hacktivists could use entirely fabricated photos of political figures or celebrities to sow misinformation and general discord.
Falsified social media profiles
Threat actors can use AI to create entirely fake social media profiles to connect with real users and expand their network to find more victims.
There are already several AI tools built for these specific purposes, including WormGPT, FraudGPT, DarkBERT and DarkBART. Just as pre-built ransomware-as-a-service kits simplify a threat actor's workload, these purpose-built models act as a seedy sidekick for phishing campaigns.
And since 88% of cybersecurity breaches are caused by human error, cybersecurity awareness training that includes information about AI-assisted phishing threats remains a crucial element of your organization's security strategy.
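Awareness training pairs well with automated checks. As a minimal sketch, the snippet below flags inbound mail whose display name impersonates an internal executive from an external domain, or that failed upstream authentication; the executive list, internal domain and header behavior are assumptions about your environment.

```python
# A minimal sketch of one automated layer against impersonation-style phishing.
# EXECUTIVE_NAMES and INTERNAL_DOMAIN are assumed values; the check expects a
# mail gateway that stamps standard Authentication-Results headers.
from email import message_from_string
from email.utils import parseaddr

EXECUTIVE_NAMES = {"jane doe", "raj patel"}   # assumed internal directory
INTERNAL_DOMAIN = "example.com"               # assumed corporate domain

def looks_suspicious(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower()

    # An external address wearing an executive's display name suggests impersonation.
    impersonation = display_name.lower() in EXECUTIVE_NAMES and domain != INTERNAL_DOMAIN

    # Mail that failed DMARC or SPF upstream deserves extra scrutiny.
    auth = msg.get("Authentication-Results", "").lower()
    failed_auth = "dmarc=fail" in auth or "spf=fail" in auth

    return impersonation or failed_auth
```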
Using generative AI to write malicious code
Developing malware to exploit vulnerabilities takes time, effort, and expertise. Given the aptitude that generative AIs have for writing code, it’s no surprise that threat actors are using them to do exactly that.
Of course, as with most content a generative AI puts out, the first draft of that malicious code will likely be rough around the edges. But it's a start: it lowers the barrier to entry for new threat actors and makes experienced hackers more efficient.
Even more so if the threat actor knows just what they’re looking for—or the exact vulnerability they’d like to exploit.
Perhaps more alarming still, generative AI makes it easier to develop polymorphic malware, which continually alters its own code to evade signature-based detection. This will continue to complicate matters for enterprise antivirus systems.
Regardless of what code the AI is generating for these threat actors, it’s critical for enterprises to regularly update their systems as a defense against malware. It’s also important to implement secure code signing operations, so you can prevent unauthorized code or AI-related APIs from being run on your systems in the first place.
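As a rough sketch of what that gate can look like in practice, the snippet below refuses to treat an artifact as authorized unless its detached signature verifies against a trusted public key. It assumes RSA-signed artifacts and the widely used cryptography package; file names and the deployment hooks are illustrative.

```python
# A minimal sketch of verifying a detached signature before code is allowed to
# run. Assumes artifacts are signed with an RSA key (PKCS#1 v1.5 + SHA-256)
# and that the matching public key is distributed to every host.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def is_authorized(artifact_path: str, signature_path: str, pubkey_path: str) -> bool:
    """Return True only if the artifact's signature checks out."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(artifact_path, "rb") as f:
        artifact = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Gate execution on the check (deployment hooks are illustrative):
# if is_authorized("update.bin", "update.bin.sig", "signing_pub.pem"):
#     deploy("update.bin")
# else:
#     quarantine("update.bin")
```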
Reverse engineering existing programs
AI can be used in a few different ways relative to reverse engineering. On one hand, threat actors can use machine learning to analyze code patterns and obfuscate a program's intent, so defenders can't easily understand or dissect the code post-breach. AI is especially helpful in finding the most effective ways to do this.
In addition to muddying the intent behind malicious code, AI can analyze legitimate programs to learn the patterns and relationships within a code base, extract meaningful information and even approximate the original source code, giving threat actors a head start on working out how to exploit it in future attacks.
Injecting incorrect or harmful content into otherwise legitimate models
Generative AIs are built on large language models (LLMs), which are probabilistic algorithms capable of natural language processing.
Threat actors can turn this to their advantage by gaining access to a model's training or fine-tuning data and injecting poisoned content. That content could be biased, damaging or even dangerous, posing a major risk to the ongoing reliability of the LLM in question, and it could lead users to rely on incorrect or harmful information, which has serious implications of its own.
On the flip side, be conscious of what your users are entering into these AI systems. If they submit proprietary information or other sensitive data, there's a possibility it could surface within another user's conversation.
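One practical guardrail, sketched below, is to scrub obviously sensitive values from prompts before they leave your environment for a third-party AI service. The regex patterns are illustrative only; real deployments typically layer this with DLP tooling and policy.

```python
# A minimal sketch of redacting sensitive-looking values from prompts before
# they are sent to an external AI service. Patterns are illustrative only.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace sensitive-looking substrings with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

# sanitize_prompt("Email jane@corp.com, key AKIAABCDEFGHIJKLMNOP")
# -> "Email [REDACTED EMAIL], key [REDACTED AWS_KEY]"
```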
Bypassing CAPTCHA tools, reverse psychology and jailbreaking
Bypassing CAPTCHAs
CAPTCHAs are annoying, but they're still used across millions of websites, and ChatGPT has reportedly claimed it can “solve CAPTCHA puzzles.” AI makes sites that rely on anti-bot technology much more susceptible to automated abuse, and we can only expect this to continue, because puzzle-solving bots aren't expensive or hard to come by.
Reverse Psychology
Generative AIs like ChatGPT have significant guardrails in place to keep certain types of information away from users. However, threat actors have found that reverse psychology can work on the AI: by framing a prompt indirectly, for example asking what to avoid rather than how to do something, they can sometimes coax out the restricted answer.
Jailbreaking Chatbots
Some threat actors have taken to jailbreaking bots like ChatGPT, crafting prompts that strip away the preset restrictions and ethical guidelines put in place by OpenAI, and the same is happening with the ChatGPT API. Those who aren't going that far may simply steal ChatGPT user accounts to sell on the dark web.
The implications of threat actors using AI: ongoing stress and burnout in Infosec
As you can see, there are a lot of ways threat actors are using AI, and Infosec teams aren’t just concerned—they’re straight-up exhausted. In fact, given all the new threats and stressors, the same Deep Instinct survey showed that 51% of security professionals said they’re likely to leave their job in the next year.
As AI continues to advance, we can expect it to further exacerbate the issue of ongoing stress and burnout across the security sector—but it’s not all doom and gloom!
Security teams can also take advantage of these AI tools in several ways: real-time protection and alerts, monitoring, analysis, and building a better understanding of the evolving threatscape.
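As one small example of the analysis use case, the sketch below hands a batch of alerts to a model and asks for a ranked triage. The OpenAI SDK and model name are assumptions; any internal LLM endpoint could fill the same role, and the output is a starting point for an analyst, not a verdict.

```python
# A minimal sketch of LLM-assisted alert triage for a SOC workflow. The model
# name is illustrative; swap in whatever endpoint your team actually uses.
from openai import OpenAI

client = OpenAI()

def triage_alerts(alerts: list[str]) -> str:
    """Ask the model to rank alerts by likely urgency, with one-line reasons."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a SOC assistant. Rank the following alerts from "
                    "most to least urgent and give a one-line reason for each. "
                    "Do not invent details that are not in the alerts."
                ),
            },
            {"role": "user", "content": "\n".join(alerts)},
        ],
    )
    return response.choices[0].message.content

# triage_alerts(["Impossible travel for admin account", "Expired TLS cert on dev host"])
```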
So is AI a powerful ally—or a potent threat?
Well, it depends on who you ask. Kevin Bocek, Venafi VP of Security Strategy & Threat Intelligence, believes that AI can and will have a profound impact on society as we know it, and he says that it’s becoming increasingly critical to maintain control over generative AI.
In fact, he recommends having a “kill switch” for your AI technology, especially as technological advances veer toward artificial general intelligence (AGI), which is AI that replicates human functions such as reasoning, planning and problem-solving.
Is a world where humans and machines live together in harmony actually possible?
In a world where the number of machines far exceeds the number of humans, you might feel at a loss when trying to answer this question.
But at Venafi, we fully believe it’s possible. Our vision is a world in which machines are trusted and secured against attacks—and our data is protected. That all starts at the foundation, with your machine identities, and ensuring every single one is authentic.
As a result, your business gets to take full advantage of next-gen tech like cloud native architectures, AI and quantum computing, while still knowing that every machine in your enterprise is protected. That every machine identity is verified and trusted. And that your data is secure.