The European Parliament has approved the world's first comprehensive set of rules governing artificial intelligence. The move was spurred by the rapid advance of AI technologies, which has raised concerns about effects on employment, personal privacy, and individual rights.
This regulatory framework is designed to safeguard fundamental human rights, uphold democratic values, ensure the rule of law, and protect environmental integrity against the potential dangers of high-risk AI systems. It also sets forth requirements for AI systems, tailored to the level of threat and influence they may pose.
The legislation takes a risk-based approach, categorizing AI systems from "unacceptable" through high, medium, and low risk. It prohibits AI practices that could endanger citizens' rights, including biometric categorization systems based on sensitive characteristics, emotion recognition in the workplace, and AI that manipulates human behavior. General-purpose AI systems and models must adhere to EU copyright law and provide detailed summaries of the training data used.
According to Dragos Tudorache, Member of the European Parliament, “We have not yet witnessed the full power of AI. Existing capabilities have already surpassed anything we could have imagined and this will continue, exponentially. AGI is something that we need to prepare for.”
He also stated that, “The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential.”

Kevin Bocek, Chief Innovation Officer at Venafi, notes, “This law aims to ensure the safety, transparency, non-discrimination, and traceability of AI so that it isn’t exploited or used for malicious means by adversaries. The great thing about the EU’s AI Act is that it aligns AI models’ identities, akin to human passports, subjecting them to a Conformity Assessment for registration on the EU’s database.”
Unlike many other regulatory efforts, the EU AI Act spells out consequences for organizations that break the rules: noncompliant organizations face fines ranging from €7.5 million to €35 million, depending on the infringement and the size of the company.
Armand Ruiz, AI Director at IBM wrote that “The EU AI Act is a risk-based approach, meaning that it places stricter requirements on AI systems that pose a higher risk to human health, safety, and fundamental rights.” Ruiz also presented the following simplified list of prohibited AI and key requirements for high-risk AI.
Prohibited AI Systems
- Social credit scoring systems
- Real-time biometric identification in public places (except for limited, pre-authorized situations)
- AI systems that exploit people's vulnerabilities (e.g., age, disability)
- AI systems that manipulate people's behavior or circumvent their free will
Key Requirements for High-Risk AI
- Conducting a fundamental rights impact assessment and conformity assessment
- Registering the system in a public EU database
- Implementing a risk management and quality management system
- Implementing data governance measures (e.g., bias mitigation, representative training data)
- Providing transparency (e.g., Instructions for Use, technical documentation)
- Ensuring human oversight (e.g., explainability, auditable logs, human-in-the-loop)
- Ensuring accuracy, robustness, and cybersecurity (e.g., testing and monitoring)
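Ruiz's breakdown above can be read as a simple decision procedure: determine a system's risk tier, then look up the obligations that tier triggers. The following Python sketch illustrates that structure only; the keyword triggers, tier names, and function names are illustrative assumptions, not the Act's legal test, which is a matter of legal analysis rather than string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    MINIMAL = "minimal"

# Hypothetical keyword triggers distilled from the lists above.
PROHIBITED_USES = {"social scoring", "behavioral manipulation",
                   "exploiting vulnerabilities"}
HIGH_RISK_USES = {"biometric identification", "critical infrastructure",
                  "employment screening"}

# Obligations for high-risk systems, paraphrased from the list above.
HIGH_RISK_OBLIGATIONS = [
    "fundamental rights impact assessment",
    "conformity assessment",
    "registration in public EU database",
    "risk and quality management system",
    "data governance and bias mitigation",
    "transparency documentation",
    "human oversight",
    "accuracy, robustness, and cybersecurity",
]

def classify(use_case: str) -> RiskTier:
    """Map a described use case to a risk tier (illustrative only)."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in text for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    return RiskTier.MINIMAL

def obligations(tier: RiskTier) -> list[str]:
    """Return the compliance obligations a tier triggers."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("deployment is prohibited under the Act")
    return HIGH_RISK_OBLIGATIONS if tier is RiskTier.HIGH else []
```

The point of the sketch is the shape of the regulation, not its content: prohibited uses short-circuit everything, high-risk uses attach the full obligation list, and everything else faces lighter requirements.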
“As governments around the world grapple with how best to regulate a technology of growing influence and import, none have touted the possibility of AI identity management,” Bocek warns. “The EU’s regulation, by far the most fully formed, says each model must be approved and registered—in which case it naturally follows that each would have its own identity. This opens the door to the tantalizing prospect of building a machine identity-style framework for assurance in this burgeoning space.”
Bocek notes that there’s plenty still to work out. But assigning each AI a distinct identity would enhance developer accountability, foster greater responsibility, and discourage malicious use. Doing so with machine identity isn’t just something that will help protect businesses in the future; it’s a measurable success today. More broadly, it would strengthen security and trust in a technology so far lacking both. It’s time for regulators to start thinking about how to make AI identity a reality.