Organizations Struggle to Secure AI-Generated and Open Source Code
Are your developers moving too fast to maintain trust?
We surveyed 800 security leaders to understand their concerns about the use of AI-generated and open source code in production environments, and what they see as their best options for mitigating the risk.
Is 1000x DevOps productivity worth 1000x the risk?
AI and open source code are supercharging development speed, but InfoSec leaders are worried about how to keep up.
These leaders also realize that restricting the use of AI and open source could impact their company’s competitive advantage, and that only intensifies the tug of war between security and velocity.
But why do AI-generated and open source code have InfoSec so on edge? And how can you safely leverage such powerful technology without increasing risk? Read the full report to find out.
91%
are worried about the speed of development
86%
say open source code encourages speed over security
92%
are concerned about developers using AI to generate code
Key Findings: A Snapshot
About this Report
Organizations Struggle to Secure AI-Generated and Open Source Code dives into InfoSec’s rising concerns about the speed of development, specifically focusing on challenges in preventing threats from AI-generated and open source code.
To gain a clear picture of this rapidly evolving problem space, Venafi sponsored an independent survey of 800 security leaders across the U.S., U.K., France and Germany.
The survey data revealed key insights that answer four primary questions:
Why are InfoSec leaders worried about the speed of development?
How readily do teams trust open source code libraries—and why?
To what extent are developers using AI to generate code?
What are InfoSec’s current sentiments and main concerns about AI-generated code?
Start Reading Now