In part 1 of this 2-part series, I discussed the many ways threat actors are using generative AI.
This time, we’ll take a look at the many ways Infosec teams are joining forces with AI-powered allies, and how machine identity security plays a vital role in this collaboration.
Cybersecurity is the fastest-growing category of AI-related software
This comes as no surprise, given that cyberattacks occur, on average, every 39 seconds. The amount of data that comes from a company barraged with attack attempts is incalculable—and it’s far too much for already burned-out human brains to parse, analyze, and interpret.
And given that threat actors’ arsenals are always evolving, risk levels are off the charts. DDoS attacks, cross-site scripting, SQL injections, brute-force attacks, social engineering attacks, ever-advancing malware—these are but a few of the threats Infosec teams are continuously hunting for.
But one missed threat vector is all it takes for the adversary to gain access. And with that number of threats rampaging through cyberspace, it’s already hard enough to keep them out.
If the bad guys are using AI, too? Virtually impossible. Unless Infosec teams also capitalize on the AI boom.
The benefits of AI for cybersecurity teams
Artificial intelligence models offer many benefits for cybersecurity teams, including the capability to analyze enormous swaths of data, discern patterns and predict future behavior.
To stop threats before they happen.
Why is this important? Without AI, proactive threat hunting and eradication is extremely difficult. Doing so requires Infosec teams to spend precious time riffling through enormous banks of data to find potential red flags.
Like shuffling through a Mount Everest-sized haystack to find one needle that’s just a couple of inches long.
Traditional threat hunting systems are signature-based, which means they can generate a lot of false positive alerts. These systems also make it easy for a slick, conniving adversary to create a new malware variant that doesn’t exist in a system’s database and slip past defenses, unseen. Finally, the rule-based setups for these traditional systems are inflexible, and they’re difficult to adapt to new threats.
AI models and machine learning capabilities, however, are adaptable and scalable, and they can conduct real-time, predictive threat analysis.
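To make that contrast concrete, here’s a minimal sketch (purely illustrative, in Python) of why exact-signature matching misses variants: change a single byte of a known sample and the fingerprint no longer matches. The “samples” and hash database below are made-up stand-ins, not real malware or a real detection engine.

```python
# Illustrative sketch: exact-signature (hash) matching misses trivially modified variants.
# The "samples" and hash set below are made-up stand-ins, not real malware signatures.
import hashlib

known_bad_hashes = set()

original_sample = b"malicious-payload-v1"
known_bad_hashes.add(hashlib.sha256(original_sample).hexdigest())

def signature_match(sample: bytes) -> bool:
    """Return True only if the sample's hash is already in the signature database."""
    return hashlib.sha256(sample).hexdigest() in known_bad_hashes

print(signature_match(original_sample))         # True: the known variant is caught
print(signature_match(original_sample + b" "))  # False: a one-byte change slips past
```

An ML-based detector scores features or behavior rather than exact fingerprints, which is what lets it generalize to variants it has never seen.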
Predictive threat analysis and automated response
Taking threats out before they become threats is an idea that’s been explored across pop culture, from 1984 to Minority Report and even Captain America: The Winter Soldier. But we’re not talking Orwellian thought police, pre-cogs, or even hovercrafts equipped with laser-targeted weapons.
I’m referring to predictive threat models in cybersecurity.
They already exist today, and as long as these tools are trained on useful, accurate data, they’ll get even better at analyzing trends, looking for anomalies, and helping to build proactive cybersecurity strategies.
Here are two potential examples:
- Fraud detection: Businesses must verify payment transactions. An AI trained on a large dataset of both fraudulent and legitimate examples can, over time, discern one from the other—and learn which items are typical targets, as well as the regions where most fraudulent attempts originate. For any future transaction, the business can then use the AI to verify legitimacy before sending money to a potentially unauthorized party.
- Suspicious user behaviors: In a similar way, to ensure network access is only given to authorized personnel, security teams can use AI to analyze user behaviors. If red flags such as odd login times, strange login locations, or unauthorized attempts to access admin-level information occur, the AI can flag them and keep the user in question from moving further into systems by quarantining a device, kicking the user off the network, or disabling the account entirely. (A minimal sketch of this idea follows the list.)
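For the second example, here’s a minimal sketch of how a team might prototype that behavioral analysis with scikit-learn’s IsolationForest, an unsupervised anomaly detector. The feature choices, tiny training set, and responses are illustrative assumptions, not a production design.

```python
# Illustrative sketch: flag anomalous logins with an unsupervised anomaly detector.
# The features (hour, failed attempts, distance, admin access) and the tiny training
# set are assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, failed_attempts, km_from_usual_location, touched_admin_resource]
historical_logins = np.array([
    [9, 0, 2, 0], [10, 1, 5, 0], [14, 0, 0, 0], [8, 0, 3, 0],
    [11, 0, 1, 0], [13, 2, 4, 0], [9, 0, 2, 0], [15, 1, 6, 0],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(historical_logins)

def assess_login(event):
    """Return a coarse response for a single login event."""
    score = model.decision_function([event])[0]  # lower = more anomalous
    if model.predict([event])[0] == -1:          # -1 means "outlier"
        return f"quarantine device and alert (anomaly score {score:.3f})"
    return f"allow (anomaly score {score:.3f})"

# A 3 a.m. login with many failures, 4,200 km from the usual location, touching admin data
print(assess_login([3, 8, 4200, 1]))
```

In a real deployment the model would train on far more history, and the response (quarantine, forced re-authentication, account disablement) would be driven by policy rather than a single score.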
The importance of a proactive security strategy
Traditionally, cybersecurity has been about responding to an action that’s already happened: a threat actor gaining access to privileged information or stealing data, for instance. Too often, it’s about cleaning up the mess. Artificial intelligence can help ensure those messes don’t happen in the first place.
And AI is going to become central to proactive security in the coming years, especially in the areas of threat detection, prevention, and response. Ordinarily, intrusions can take days, or even months, to be discovered, if they’re discovered at all.
Artificial intelligence can detect and respond to issues in real time, making split-second decisions without human error. Of course, the success of any given model depends on the data it’s been trained on. (You’ve no doubt heard the adage “Garbage in, garbage out” applied to AI tools. Nowhere is it more apparent than in cybersecurity.)
However, if your model knows what to look for and can mount a proper response, AI can help protect systems without operational disruption. Like the kind of company-wide disruption that can come from a single user clicking on a phishing email.
Using Natural Language Processing (NLP) to detect phishing emails
We’ve all gotten them. A spoofed email that looks like it’s coming from your CEO, asking for gift cards. Or a fabricated invoice from the finance department that’s a little trickier to discern. Or maybe you’ve even sent out test versions yourself to make sure your company is staying on top of its game.
Phishing emails. They’re a pain. Not to mention a huge risk for businesses.
AI models with Natural Language Processing (NLP) capabilities can be used to dissect the language in emails, searching for patterns and context that may signal malicious intent. These models are trained on large banks of legitimate and phishing emails, then refined to look for the typical characteristics used to snag a user (a minimal sketch follows the list below):
- Odd, pushy requests
- Grammar and spelling issues
- Hyperbolic language
- Weird send times
- Mismatched from and reply-to addresses
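Here’s a minimal sketch of that approach: a small text classifier trained on labeled phishing and legitimate messages. The tiny inline dataset is a placeholder; a usable model needs a large, representative corpus and ongoing retraining.

```python
# Illustrative sketch: a bag-of-words phishing classifier.
# The four inline "emails" are placeholders; real training needs a large labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "URGENT: buy gift cards now and reply with the codes immediately",
    "Your invoice is overdue, click this link to avoid account suspension",
    "Attached is the agenda for Thursday's project sync",
    "Reminder: quarterly security training is due at the end of the month",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(emails, labels)

incoming = "Final notice: wire payment today or your account will be suspended"
phishing_probability = classifier.predict_proba([incoming])[0][1]
print(f"Phishing probability: {phishing_probability:.2f}")

# A high score would trigger link disabling and route the message to a human reviewer.
```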
Once an AI has detected a phishing email, it can disable the links before a user has the chance to mistakenly click on them. The AI can also flag the message for a human to do a deeper dive.
Automated patch management
Most enterprises can account for less than 75% of the endpoint devices on their network. That leaves a quarter or more of devices potentially unaccounted for. The issue only compounds when you bring remote and hybrid workers into the equation: patching all of their disparate, distributed devices is even more difficult.
AI-powered automation can help mitigate these gaps, helping to quickly prioritize patch deployment, optimize send times to minimize user disruptions, provide visibility and control, and scale a predictive program. This gives more time back to your security team to spend on higher-level strategic initiatives, rather than the mundane, repetitive work involved with patch management.
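As a simple illustration of the prioritization piece, here’s a sketch that ranks pending patches by a weighted risk score. The fields, weights, and placeholder identifiers are assumptions for demonstration, not any vendor’s scoring formula.

```python
# Illustrative sketch: rank pending patches by a weighted risk score.
# Fields, weights, and the placeholder identifiers are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class PendingPatch:
    advisory_id: str
    cvss_score: float          # 0.0 - 10.0 severity
    exploit_in_the_wild: bool  # active exploitation observed
    exposed_devices: int       # devices on the network still missing this patch

def risk_score(patch: PendingPatch) -> float:
    """Higher score = deploy sooner."""
    score = patch.cvss_score * 10
    if patch.exploit_in_the_wild:
        score += 50
    score += min(patch.exposed_devices, 500) * 0.1
    return score

queue = [
    PendingPatch("EXAMPLE-ADVISORY-1", 9.8, True, 120),
    PendingPatch("EXAMPLE-ADVISORY-2", 5.4, False, 900),
    PendingPatch("EXAMPLE-ADVISORY-3", 7.1, True, 12),
]

for patch in sorted(queue, key=risk_score, reverse=True):
    print(f"{patch.advisory_id}: priority {risk_score(patch):.1f}")
```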
Reverse engineering malware and writing remediation scripts
Malware, particularly the kinds of adaptive malware written with AI, can be difficult to combat. Artificial intelligence can help security professionals analyze the malicious code and reverse engineer it to find ways to defend against it in the future.
AI can also help write the preventative scripts needed to fortify those layers of protection. Of course, all code generated by an AI must be verified by an experienced person, because AI code is still often erroneous, and code signing best practices still need to be followed to ensure trust.
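One way to enforce that trust boundary is a code-signing-style gate: an AI-generated remediation script only runs after a human reviewer signs it, and the signature verifies at deployment time. Here’s a minimal sketch using Ed25519 from the cryptography library; the key handling, script contents, and workflow are simplified assumptions.

```python
# Illustrative sketch: refuse to deploy an AI-generated remediation script unless a
# detached signature from a trusted reviewer verifies. Key handling is simplified;
# in practice the private key would live in an HSM or signing service.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

reviewer_key = Ed25519PrivateKey.generate()
public_key = reviewer_key.public_key()

# A hypothetical remediation script, signed only after human review.
script = b"#!/bin/sh\n# block a known-bad address (TEST-NET-3 placeholder)\niptables -A OUTPUT -d 203.0.113.7 -j DROP\n"
signature = reviewer_key.sign(script)

def deploy_if_trusted(script_bytes: bytes, sig: bytes) -> None:
    try:
        public_key.verify(sig, script_bytes)
        print("Signature valid: script may be deployed.")
    except InvalidSignature:
        print("Signature invalid: refusing to deploy an unreviewed script.")

deploy_if_trusted(script, signature)                             # accepted
deploy_if_trusted(script + b"# unreviewed change\n", signature)  # tampered: rejected
```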
Red- and blue-team attack simulations
In the world of offensive hacking and penetration testing, red- and blue-team exercises can help businesses ensure their defenses are up to snuff with today’s threat landscape. But sometimes, the construction of those exercises is limited by human knowledge and techniques alone.
AI models can analyze previous attacks, even ones that’ve been overlooked, and security professionals can use them to develop more creative scenarios—or even use the AI as the entire red team itself.
Compiling, writing, and visualizing reports
As if parsing and analyzing zettabytes of information isn’t already enough, Infosec teams often have to interpret what that data means for their organizations. Artificial intelligence is a true game-changer in this regard. Prompted correctly, generative AI models can help Infosec teams compile, write, and even visualize reports and presentations to give to their board, executives and other teammates.
Note: Always ensure AI-generated content is checked by a human. Never enter proprietary, confidential or personally identifiable information into a publicly trained generative AI model.
Network actor authentication
You authenticate your identity in some way, shape, or form, every time you open a web browser—or even when you log onto your machine for work. Usernames, passwords, MFA tokens. They’re all important, but usernames and passwords aren’t always as secure as they should be.
Artificial intelligence can take user authentication one step further, enabling the use of biometrics, voice recognition, behavioral patterns or context to authorize or deny access to a given system. (Authentication also applies to machine identities as we move forward in a world with ever-present AI!)
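Context-aware authentication often boils down to a risk score over signals like device, location, time, and behavioral biometrics, with the score deciding whether to allow, step up to MFA, or deny. Here’s a minimal sketch; the signals, weights, and thresholds are illustrative assumptions.

```python
# Illustrative sketch: a context-aware authentication decision.
# Signal names, weights, and thresholds are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool
    typical_location: bool
    typical_hours: bool
    typing_cadence_match: float  # 0.0 - 1.0 similarity to the user's usual pattern

def authentication_decision(ctx: LoginContext) -> str:
    risk = 0.0
    risk += 0.0 if ctx.known_device else 0.35
    risk += 0.0 if ctx.typical_location else 0.25
    risk += 0.0 if ctx.typical_hours else 0.15
    risk += (1.0 - ctx.typing_cadence_match) * 0.25

    if risk < 0.3:
        return "allow"
    if risk < 0.6:
        return "step up: require MFA"
    return "deny and alert"

print(authentication_decision(LoginContext(True, True, True, 0.95)))    # allow
print(authentication_decision(LoginContext(False, False, True, 0.40)))  # deny and alert
```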
Will AI be managing all aspects of cybersecurity in the next few years?
It’s possible, according to RSA CEO Rohit Ghai. In a recent Fortune.com article, he said, “[Infosec’s] time in the cockpit is inevitably ending.” That doesn’t mean AI is coming after everyone’s jobs, but it does mean that the role humans play will change.
Ghai says, “AI and some basic cybersecurity hygiene can handle the majority of incidents. AI can automate… can manage the day-to-day, [and] human cybersecurity professionals will supervise the more impactful decisions.” Humans will also be crucial to managing the AI models used to make high-stakes security decisions, when the time comes.
Machine identity security and the future of AI in cybersecurity
To secure these AI models that may be managing your future security systems, every model, every piece of code, will need a machine identity. To maintain visibility and control over those machine identities, you’ll need a robust management and security platform that flexes and adapts to the rapidly emerging, rapidly evolving AI landscape.
That machine identity security platform will be your prime source of protection. Without it, there can’t be a secure AI future.
Because as AI becomes a foundational piece of enterprise cybersecurity architecture, threat actors will target it, attempting to steal models, poison training data, damage AI operations, and breach safety guardrails (or trick the AI into doing it for them, wreaking even further havoc).
To combat these emerging threats, your enterprise must be able to do the following (one possible verification approach is sketched after the list):
- Authenticate ALL training inputs and data
- Authenticate and authorize ALL approved models and API connections
- Authenticate and authorize ALL AI-generated code
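One concrete way to ground those three requirements is to admit an artifact (training data, a model file, or AI-generated code) only when its digest matches an approved manifest. Here’s a minimal sketch using SHA-256; in practice the manifest itself would be signed with a machine identity, and the names and contents below are hypothetical.

```python
# Illustrative sketch: admit an artifact only if its SHA-256 digest matches an
# approved manifest. In practice the manifest itself would be signed with a machine
# identity (e.g., a code-signing certificate); names and contents here are hypothetical.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in for an approved training file, with its digest pinned at review time.
approved_training_data = b"label,text\nphishing,urgent wire transfer request\n"
approved_manifest = {
    "training_set.csv": sha256_hex(approved_training_data),
}

def admit_artifact(name: str, content: bytes) -> bool:
    """Return True only if the artifact is listed in the manifest and its digest matches."""
    expected = approved_manifest.get(name)
    return expected is not None and sha256_hex(content) == expected

print(admit_artifact("training_set.csv", approved_training_data))          # True
print(admit_artifact("training_set.csv", approved_training_data + b"x"))   # False: tampered
print(admit_artifact("mystery_weights.bin", b"unapproved model"))          # False: not approved
```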
Machine identity security provides those capabilities. It is, and will continue to be, the kill switch for every AI-related piece of technology (and beyond) in your enterprise.
And, with a little help from our own proprietary AI, Venafi Athena, taking control of your machine identities has never been easier. Get to know her here.
Potential challenges of using AI in cybersecurity
While there are many ways for AI to benefit security teams, it’s still important that you keep experienced human professionals in the mix. AI models aren’t a silver bullet for cybersecurity, and they won’t be for the foreseeable future. But you can start experimenting and seeing the potential in these tools. Just keep these considerations in mind as you do.
Awareness and education
Security teams the world over are concerned about the potential risks of AI becoming ubiquitous in enterprise systems. Implementing AI-focused education and awareness will go a long way toward ensuring your organization stays on the cusp of innovation without making breach headlines.
Parameter training and refinement
Without quality data, you won’t get quality output from an AI model. Always ensure your data is accurate and usable within a given context. It’s also important to be aware of potential biases and discrimination in these tools. They require a lot of data to be successful, but if the data ingested is biased against a particular group (even unconsciously), the entire model can be skewed that way.
Budget and compute
AI is a rapidly growing space, with a market expected to triple to more than $60 billion over the next five years. That’s an astounding level of growth, so be prepared to explain use cases that tie to the bottom line in clear, direct ways.
Also be aware of the computational resources required to manage AI models. As mentioned earlier, these models require a lot of data, which smaller businesses may not have. Ensure you’ve considered storage, data backups, and other aspects of the equation before electing to build AI into your security architecture.
The human in the mix is more important than ever
AI is exciting technology, but human insight and intuition are paramount—especially in a world where AI is making high-stakes decisions. Also, when training a model, whether public or private, you need to be cognizant of what data is being fed to it and how both input and output data will be used going forward.
The future of cybersecurity will be AI-powered. Are you ready for it?
Without holistic machine identity security, maintaining control over AI models across your enterprise will be difficult. Visibility and automation are crucial to cultivating innovation without sacrificing security.
Venafi TLS Protect Cloud can ensure that you take charge of every AI model through the application and management of machine identities. And Venafi Athena can further simplify the process.
No matter what questions you have about using Venafi TLS Protect Cloud, she’s your security sidekick, there to help you slay complexity with everything from practical advice to optimization tips and even integration code.