In today’s fast-paced digital economy, the ability to innovate and remain competitive depends on how quickly your organization can develop new services and applications. To accomplish this seemingly Herculean task, developers are accelerating their continuous integration and continuous delivery (CI/CD) pipelines with new technologies, such as artificial intelligence (AI) and open source components, that are challenging for security teams to vet.
This raises concerns for security professionals who fear that their organization’s developers may be prioritizing speed over security. For example, developers increasingly rely on open source building blocks without questioning whether the libraries they come from can be trusted. This behavior sets the stage for a fundamental conflict between development speed and security: developers are under constant pressure to move so quickly that it is almost impossible for security to keep up. In this scenario, security is viewed as a roadblock instead of a vital component of the modern production environment.
Security teams everywhere are taking a hard look at how to harness the power of open source and AI-powered development without exposing themselves to increased privacy risks and intellectual property threats. One of their major concerns is that while AI has the potential to make developers 1000x faster and more productive, it can amplify risk at the same frightening pace.
According to Kevin Bocek, chief innovation officer at Venafi, “Security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and won’t give up their superpowers. And attackers are infiltrating our ranks—recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg.”
But are these concerns changing security behavior in organizations? In a new research report, Organizations Struggle to Secure AI-Generated and Open Source Code, Venafi found that 83% of organizations use AI to generate code despite mounting security concerns. Security leaders also worry that open source and AI-powered development are outpacing security, and some go so far as to call for banning AI-generated code altogether.
Based on a survey of 800 security leaders across the U.S., U.K., France and Germany, the report explores the risks of AI-generated and open source code and the challenges of securing it amid hyper-charged development environments. It highlights why security leaders are concerned about AI and open source, as well as the specific actions they believe can mitigate the related risks.
First and foremost, the report reveals that 92% of security leaders have concerns about the use of AI-generated code within their organization. Here are some of the additional findings that break down the specifics of security leader concerns:
Security leaders feel at odds with developer teams over AI
- 72% feel they have no choice but to allow developers to use AI to remain competitive
- 63% have considered banning the use of AI in coding due to the security risks
Security leaders believe they are losing control of AI
- 66% say it is impossible for security teams to keep up with AI-powered developers
- 78% believe AI-developed code will lead to a security reckoning
- 59% lose sleep over the security implications of AI
Security leaders worry about unfettered use of open source code
- 86% believe developer use of open source code encourages speed rather than security
- 90% say their developers trust code in open source libraries
- 75% say it is impossible to verify the security of every line of open source code
The full report reveals why 92% of security leaders believe code signing should be used to ensure open source code can be trusted. “In a world where AI and open source are as powerful as they are unpredictable, code signing becomes a business’ foundational line of defense,” Bocek advises. “But for this protection to hold, the code signing process must be as strong as it is secure. It’s not just about blocking malicious code: organizations need to ensure that every line of code comes from a trusted source, validating digital signatures and guaranteeing that nothing has been tampered with since it was signed. The good news is that code signing is used just about everywhere; the bad news is that it is most often left unprotected by the security teams who can help keep it safe.”
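To make that chain of trust concrete, here is a minimal sketch of the sign-and-verify step at the heart of code signing. It assumes an Ed25519 key pair and the open source Python cryptography library; the artifact contents and key handling are illustrative only, and a real deployment would distribute the public key through a protected certificate or trust store rather than generating keys inline.

```python
# Minimal code signing sketch using the Python "cryptography" library.
# Certificate chains, key distribution and revocation are out of scope.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the artifact bytes before release.
signing_key = Ed25519PrivateKey.generate()  # illustrative; real keys live in an HSM or vault
artifact = b"contents of the build artifact or package"
signature = signing_key.sign(artifact)

# Consumer side: verify against the publisher's trusted public key
# before allowing the code to run.
trusted_public_key = signing_key.public_key()
try:
    trusted_public_key.verify(signature, artifact)
    print("Signature valid: artifact is intact and from the expected source")
except InvalidSignature:
    print("Signature check failed: block execution of this code")
```

As Bocek’s warning suggests, this check is only as trustworthy as the private key behind it: an attacker who steals an unprotected signing key can sign malicious code that verifies cleanly, which is why protecting the code signing process itself is the foundational step.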
Download the report to learn why maintaining the code signing chain of trust can help your organization prevent unauthorized code execution, while also scaling your operations to keep up with developer use of AI and open source technologies. Venafi’s industry-first Stop Unauthorized Code Solution helps security teams and administrators maintain their code signing trust chain across all environments.