In a major milestone reported this week, Google’s Project Zero and DeepMind teams joined forces and used an AI agent to discover a zero-day vulnerability in real-world code, mitigating the threat before it could impact users.
The achievement is, according to Project Zero, a first of its kind, made possible by Big Sleep, an advanced AI agent powered by large language models (LLMs) and built to seek out security vulnerabilities in widely used code. Big Sleep is an evolution of Project Naptime, an LLM-assisted framework for security vulnerability research.
How did it all play out? Keep reading, because this development is anything but a snooze fest!
Hang on, what’s a zero-day vulnerability?
Zero-day vulnerabilities are security flaws that remain unknown to those who need to fix them, like software vendors.
When malicious actors discover these vulnerabilities first, they can exploit them to devastating effect. One notable example from recent years is Log4j, which you can learn more about here.
What did Big Sleep find?
Thanks to the collaboration between Google’s powerhouse teams, Big Sleep discovered an “exploitable stack buffer underflow” in SQLite, an open source database engine.
This type of vulnerability undermines memory safety and can cause crashes or security issues. An underflow lets the running program read or write memory before the start of a buffer, reaching parts of memory it shouldn’t be able to touch, potentially enabling attackers to read sensitive data, corrupt program state or even execute malicious code.
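To make that concrete, here’s a minimal C sketch of the pattern. This is purely illustrative and not the actual SQLite bug: an index computed from untrusted input dips below zero, so the program writes one byte before the start of a stack buffer.

```c
#include <string.h>

/* Illustrative only: NOT the actual SQLite bug, just the general
 * shape of a stack buffer underflow, where a computed index dips
 * below the start of a stack-allocated buffer. */
static void parse_field(const char *input)
{
    char buf[16] = {0};
    int len = (int)strlen(input);

    if (len >= (int)sizeof buf)   /* the overflow end is checked... */
        return;

    buf[len - 1] = '\0';          /* ...but when len == 0 this writes
                                   * to buf[-1], one byte BEFORE the
                                   * buffer: a stack underflow that
                                   * corrupts adjacent stack memory */
    memcpy(buf, input, (size_t)len);
}

int main(void)
{
    parse_field("");              /* empty input triggers the bug */
    return 0;
}
```

Compiled with AddressSanitizer (clang -fsanitize=address), the empty-input call is reported as a stack-buffer-underflow; without instrumentation, it silently corrupts whatever the compiler placed just before buf.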
Fortunately, Big Sleep uncovered the vulnerability and Project Zero alerted the SQLite development team, who patched the problem that same day, before it had the chance to appear in an official release and affect users.
Exciting, albeit deeply experimental results
Big Sleep’s discovery of the vulnerability has, in Project Zero’s words, “tremendous defensive potential.” The ability to find vulnerabilities in software before it’s released means attackers don’t have room to compete; they simply have no way to exploit the vulnerability if there isn’t one to begin with.
However, it’s important to note that Big Sleep is a very targeted, hyper-specific “fuzzing” tool at present. Google can leverage it to identify errors in code, but not every error it surfaces is an exploitable vulnerability, and some flaws are harder to find and require different approaches.
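For contrast, conventional bug hunting in this space looks like the minimal libFuzzer-style harness sketched below. This is a generic illustration, not SQLite’s actual fuzzing setup: it hammers the SQL engine with randomly mutated inputs and relies on sanitizers to flag memory errors. Notably, Project Zero reported that existing fuzzing efforts had not caught the bug Big Sleep found.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sqlite3.h>

/* Generic libFuzzer-style harness (an illustration, not SQLite's own
 * fuzzer): execute attacker-controlled SQL against an in-memory
 * database and let the sanitizers flag any memory errors. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 0;

    char *sql = malloc(size + 1);                 /* NUL-terminate input */
    if (sql) {
        memcpy(sql, data, size);
        sql[size] = '\0';
        sqlite3_exec(db, sql, NULL, NULL, NULL);  /* a crash is a finding */
        free(sql);
    }
    sqlite3_close(db);
    return 0;
}
```

Built with clang -fsanitize=fuzzer,address and linked against SQLite, a harness like this churns through enormous numbers of random inputs, but random mutation can struggle to reach bugs that require semantically specific input, which is exactly the gap an LLM-driven agent aims to fill.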
Still, regardless of the experimental nature of the discovery, it’s a major step forward, and Big Sleep’s team envisions an even brighter future where AI also assists root-cause analysis, triage and resolution of issues.
Using AI to find vulnerabilities is one thing, but challenges persist
This emerging research from Google marks a major advancement in the use of AI for cybersecurity, but the scenario also highlights the need to prevent unauthorized code from running across enterprise operating environments like Microsoft, Apple, Android, Kubernetes and Linux.
How does this relate to the aforementioned zero-day? Let me explain. One potential outcome of a bad actor exploiting a zero-day is malicious code execution, which can have large-scale downstream impacts for users.
But with a robust code signing trust chain, teams can ensure that unauthorized or malicious code isn’t allowed to run—at all.
Establishing trust for all the code in your enterprise
Now more than ever, it’s critical to sign code to ensure its authenticity. And 92% of InfoSec leaders agree.
But modern enterprises must go beyond just code signing. Our experts recommend you secure all fundamental aspects of the code signing trust chain by following these five steps:
- Ensure code authenticity and integrity: Verify that all software comes from an approved source and hasn’t been tampered with (see the verification sketch after this list).
- Prevent private key theft and misuse: Store all private keys in a secure location, such as an HSM.
- Maintain code signing visibility across the enterprise: Monitor all activities and establish compliance and audit reporting capabilities.
- Define and enforce code signing policies: Control policy definitions, including request approvals, certificate access and permitted tools.
- Streamline developer code signing processes: Automate services to remove the hassle and overhead of developers managing and requesting code signing certificates.
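To ground the first step in code, here’s a minimal C sketch of signature verification using OpenSSL’s EVP API. It’s a simplified illustration rather than a full code signing implementation (a real trust chain also validates the certificate chain and timestamps), and the names verify_artifact and publisher.pem are hypothetical.

```c
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/pem.h>

/* Hypothetical sketch: verify a detached SHA-256 signature over an
 * artifact against a pinned publisher public key (e.g. publisher.pem),
 * and let the artifact run only if verification succeeds. */
int verify_artifact(const char *pubkey_path, const char *artifact_path,
                    const unsigned char *sig, size_t sig_len)
{
    int ok = 0;

    FILE *kf = fopen(pubkey_path, "r");
    if (!kf) return 0;
    EVP_PKEY *pkey = PEM_read_PUBKEY(kf, NULL, NULL, NULL);
    fclose(kf);
    if (!pkey) return 0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    if (ctx && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1) {
        FILE *af = fopen(artifact_path, "rb");
        if (af) {
            unsigned char buf[4096];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, af)) > 0)
                EVP_DigestVerifyUpdate(ctx, buf, n);
            fclose(af);
            /* Returns 1 only if the signature matches the file contents */
            ok = (EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1);
        }
    }

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pkey);
    return ok;    /* caller executes the artifact only when ok == 1 */
}
```

Platform loaders and admission controllers perform the production-grade version of this check automatically, and the private key that produced sig should live in an HSM, per the second step above.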
Teams should also regularly evaluate their third-party infrastructure to prevent unauthorized code execution across the enterprise.
The potential of AI is limitless, but a security-first approach is crucial for enterprise success
As Google’s recent development shows, AI holds a lot of promise for security teams. But there are still many unknown risks—and prioritizing security is essential. End-to-end code signing and malicious code prevention are critical components of this.
Learn more in this blog post or reach out to our team today to explore how to protect your business from the execution of unauthorized code.