As security professionals and threat actors alike capitalize on AI, governments around the world are beginning to make headway on AI regulation.
But these new acts and executive orders bring up one crucial question: Could too much regulation impede innovation, especially when it comes to the way open source AI tools are adopted, adapted, and refined?
Find out what governing bodies in the EU and US plan to do to regulate the rapidly expanding AI space, and how these guidelines may impact open source development of AI going forward.
EU & US: AI Regulations
The EU and US have already released their initial separate plans for AI regulation, including the EU’s AI Act and the US’s Executive Order on AI. They’ve also recently joined forces with other countries to safely guide AI usage and development.
EU AI Act
Policymakers reached a political agreement on the EU AI Act on Friday, December 8, 2023. It stands as the world’s first comprehensive regulation of artificial intelligence.
It's designed to ensure better conditions for the development and use of AI technologies. The regulation allows AI to be used in a wide range of applications, but those intended use cases must be classified according to the risk they pose to end users.
The level of regulation depends on the level of risk, which is meant to ensure safety, transparency, traceability, non-discriminatory practices, and environmental sustainability. The EU also insists that people oversee AI systems rather than treating them as “set it and forget it” tools.
The designated levels of threat classification for AI systems are:
- Unacceptable risk: These systems are considered a threat to people and will be banned.
- High risk (split into two sub-categories): These systems could adversely affect safety or fundamental rights, so they must be assessed before they go to market and regularly throughout their lifecycle.
- Products that fall under existing safety legislation in the EU (toys, aviation, cars, medical devices, lifts)
- AI systems in eight specific areas that must be registered in an EU database
- Limited risk: These AI systems should comply with minimal transparency requirements to allow end users to make their own decisions about using a product.
A note on generative AI: The EU AI Act states that 1) content generated by an AI must be disclosed; 2) models must be designed in a way that prevents them from generating illegal content; and 3) companies must publish summaries of the copyrighted data used to train any model in question.
US Executive Order on AI
The US Executive Order (EO) on AI is a “landmark Executive Order” that seeks to “ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence.”
It is a wide-sweeping EO that requires developers of AI systems to share safety results with the U.S. government; develop standards, tools, and tests to ensure systems are safe, secure, and trustworthy; protect Americans from AI-enabled fraud and deception; and establish an advanced cybersecurity program to develop AI tools and find and fix vulnerabilities in critical software.
Let’s unpack each of those a little further.
Share safety results with the U.S. government
The EO requires any company developing foundation models that pose risks to national security or safety to notify the federal government while training those models and to share the results of all red-team safety tests.
Develop standards, tools, and tests to ensure systems are safe, secure, and trustworthy
The EO splits this standardization work across three key areas:
- NIST will set rigorous standards for red-team testing to ensure model safety before a public release.
- The Department of Homeland Security (DHS) will apply those standards to critical infrastructure and establish an AI Safety and Security Board.
- The Department of Energy (DOE) and DHS will address threats to critical infrastructure as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
Protect Americans from AI-enabled fraud and deception
In this section, the EO calls for guidance around content authentication and watermarking to label AI-generated content, stating that it'll help assuage public concerns about falsified government communications.
Establish an advanced cybersecurity program to develop AI tools and find and fix vulnerabilities in critical software
Expanding on the foundation provided by the August 2023 AI Cyber Challenge, the White House seeks to use AI capabilities to further secure the nation’s software and networks.
Joining forces: Guidelines for secure AI system development
As of November 27, 2023, agencies from the US, UK, and 16 other countries have endorsed all-new guidelines on AI cybersecurity.
These guidelines, based on the “secure by default” approach defined by the NCSC, NIST, and CISA, are written for the providers of “any systems that use artificial intelligence, whether those systems have been created from scratch or built on top of tools and services provided by others.”
The guidelines are meant to help all stakeholders involved in the use of AI to make more informed decisions about the secure, responsible design, development, deployment, and operation of AI systems.
They prioritize:
- Taking ownership of security outcomes on behalf of customers
- Embracing radical transparency and accountability
- Building organizational structure by leading with a “secure-by-design” approach
The impact of increasing regulation on open source development
Now that we’ve covered the regulations and guidelines already in place in the EU and US, let’s move on to their potential impact on open source development.
At its heart, open source development relies on transparent community collaboration, and presents significant opportunities for innovation and the betterment of society. The open source model works because it democratizes software, allowing the community to transparently take existing code and extend/improve it.
Given the space to collaborate and innovate, the open source community has the power to increase AI adoption, accelerate innovation, and enable smaller, otherwise disenfranchised groups and individuals to engage in an exploding area currently dominated by proprietary tech (like Microsoft and OpenAI).
AI Regulation: Too much of a good thing?
While many industry professionals agree that some AI regulation can be beneficial, too much too quickly can stifle crucial research and understanding.
For example, the one-size-fits-all regulation presented in the EU AI Act seems to do exactly that. As Data Innovation details:
“The EU AI Act would impose the same stringent requirements on open-source foundation models as it does on closed-source models. The proposed bill states that ‘[a] provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licenses, as a service, as well as other distribution channels.’ The obligations for these models include risk mitigation strategies, data governance measures, and a ten-year documentation requirement, among others.”
Requirements like these put the power back into the hands of proprietary companies, place greater legal liability on open source models, and, overall, undermine their development, further preventing research that’s critical to the public’s understanding of AI.
Opening an even bigger can of worms
Open source innovation is critical to helping the world see what is and isn’t possible with AI, and helps us more quickly discern safe, responsible practices. Overly stringent regulation can put a serious damper on the fast-moving, iterative, flexible nature of open source development, not to mention open up an all-new can of worms when trying to regulate open source models further downstream.
For instance:
- Does a teenager in their bedroom building their own open source model have to commit to 10 years of documentation?
- Does an expanded version of a foundation model still qualify as the original foundation model, or is it a brand-new one?
- Who pays for the auditing and risk mitigation involved in an open source model?
Not only does the EU AI regulation seem to raise more questions than it answers, it also has a broader impact on the open source community as a whole and could, as a result, make the EU less competitive in the AI space. Matt Barker, Global Head of Cloud Native Services at Venafi, expresses concern that this act, coupled with the EU’s Cyber Resilience Act, could have even more far-reaching consequences.
“There must be more clarity in the Act’s language around liability. Otherwise, people writing open source code in the EU could down tools, as the stakes are simply too high.”
And if the US isn’t careful about the wide-sweeping approach it’s currently taking, it, too, could face similar challenges.
How machine identity management fosters innovation, while still helping to enable transparency and trust in open source AI
Open source has long been paving the way for the AI revolution. For example, many of the most popular proprietary models (including OpenAI’s tools) run on Kubernetes infrastructure. But to make sure that open source code, whether upstream or downstream, hasn’t been tampered with, you need a way to verify that you can trust it.
That’s where machine identity management comes in. After all, without it, a secure AI future is just a pipe dream.
How do machine identities play a role? At their most basic, AI and ML models are code, and code is itself a kind of machine. That means models can be assigned identities and authenticated, helping to make sure that AI, including open source AI, hasn’t been tampered with or poisoned by a bad actor.
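To make that concrete, here is a minimal sketch of the idea, using Python and the open source cryptography library rather than any particular product: a publisher signs the SHA-256 digest of a model artifact with a private key, and a consumer verifies that signature against the publisher's public key before loading the model. The in-memory artifact and key handling here are hypothetical, purely for illustration.

```python
# Minimal sketch: establishing and verifying a model artifact's identity.
# Uses the open source `cryptography` library (pip install cryptography).
# The in-memory "artifact" stands in for a real model file; in practice you
# would hash the file on disk in the same way.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-in for the bytes of a model file (e.g. weights pulled from a registry).
model_artifact = b"...model weights..."

# Publisher side: sign the SHA-256 digest of the artifact with a private key.
private_key = ed25519.Ed25519PrivateKey.generate()
digest = hashlib.sha256(model_artifact).digest()
signature = private_key.sign(digest)

# Consumer side: recompute the digest and verify the signature against the
# publisher's public key before loading the model. Any tampering with the
# artifact changes the digest, and verification fails.
public_key = private_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(model_artifact).digest())
    print("Model artifact verified; safe to load.")
except InvalidSignature:
    print("Verification failed; do not load this model.")
```

In practice, the publisher's public key would itself be distributed and validated through certificates and a chain of trust, which is exactly the layer where machine identity management operates.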
Digital keys and certificates will only become more important as AI becomes ubiquitous in our business world, and they're critical to empowering the open source community to safely, responsibly innovate and collaborate.
The open source debate will drive the future of AI, but heavy regulation can hold it back.
“The Crypto Wars of the 1990s showed that open research and commercial innovation would lead to greater privacy. What is important to keep in mind is that we should not race to create new regulation that slows down innovation and requires official certification—treating AIs today like weapons or pharmaceuticals. We need to promote research and innovation to achieve outcomes of standards, security and safety instead of racing to apply rules and regulations from the last century. Technologies today from modern machine identity management to code signing can be used to operate AI safely and promote innovation.” - Kevin Bocek, VP of Ecosystem and Community, Venafi
As Kevin urges, the way forward requires a careful balance between innovation and regulation. The world’s superpowers must ensure they don’t block open source research and exploration with rigid mandates that stifle the understanding of artificial intelligence we need, ultimately leaving us less secure and less safe.
That’s why, here at Venafi, we’re huge proponents of open source and ongoing innovation. We’re major supporters of the CNCF, we contribute to Hugging Face, and we understand that the potential open source collaboration provides is simply unmatched.
But to innovate safely and responsibly, you must ensure that the code you’re using can be trusted. That’s where Venafi can help. Our purpose-built code signing solution can help you secure your operations from emerging AI threatscapes, and the Venafi Control Plane for Machine Identities can help provide total visibility and automated control over your rapidly growing inventory of machines—including AI and ML models.