Back in November 2023, when President Biden signed the Executive Order on AI, some industry experts found it a sweeping, generalized effort lacking practical guidance and guardrails.
Well, the Department of Commerce seeks to change that, having unveiled a series of actions aimed at implementing the EO. This post will take a look at them, including four new publications from NIST and a new AI challenge. Finally, we'll stress the importance of machine identity security in protecting your GenAI systems.
A brief refresher on Biden's Executive Order
The Executive Order on AI was hailed as a "landmark" piece of regulation aimed at allowing Americans to harness the promise of AI while managing its inherent risks, particularly in cybersecurity.
You can read more about its primary pillars here.
The Department of Commerce—through new documentation, standards and evaluation—intends to cement a commitment to transparency and responsible innovation in AI technologies.
Department of Commerce announces new actions
Outlined in announcements from both the National Institute of Standards and Technology (NIST) and the Department of Commerce itself, these actions are multifaceted.
New draft guidance from NIST
These four draft publications are meant to improve the "safety, security and trustworthiness" of AI systems.
NIST AI 600-1: The AI Risk Management Framework: Generative AI Profile
Intended as a companion resource for NIST's larger AI Risk Management Framework, NIST AI 600-1 helps organizations identify unique risks and proposes actions for mitigation. It was developed based on input from more than 2,500 members of a GenAI-focused working group, centers on more than a dozen AI risk types and offers over 400 mitigation recommendations.
NIST SP 800-218A: Secure Software Development Practices for Generative AI and Dual-Use Foundation Models
This document is meant to help address concerns around malicious training data, offering guidance relative to data collection, bias prevention and tampering avoidance.
NIST AI 100-4: Reducing Risks Posed by Synthetic Content
As deepfakes become more realistic, discerning fact from fiction becomes more difficult. This document aims to offer technical approaches for promoting digital transparency, including provenance and detection of synthetic content, e.g., watermarking, metadata and other authentication mechanisms.
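To make the metadata-based approach concrete, here is a minimal sketch of content provenance: a producer attaches metadata to a piece of generated content and signs both with an HMAC, so a consumer holding the shared key can detect tampering. This is an illustration only, not part of the NIST guidance; the key handling, field names and functions (`attach_provenance`, `verify_provenance`) are all assumptions, and real-world provenance schemes (such as the C2PA standard) use certificate-based signatures rather than a shared key.

```python
import hashlib
import hmac
import json

# Assumption: the producer and verifier share this key out of band.
# Real provenance systems use asymmetric signatures instead.
SECRET_KEY = b"replace-with-a-real-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Bundle a content hash with provenance metadata, signed via HMAC-SHA256."""
    metadata = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True only if the metadata is untampered AND matches the content."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # metadata or signature was altered
    return record["metadata"]["content_sha256"] == hashlib.sha256(content).hexdigest()
```

The point of the two-step check is that both the metadata and the content itself are bound by the signature: swapping in different content, or editing the claimed generator, invalidates verification.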
NIST AI 100-5: A Plan for Global Engagement on AI Standards
To drive worldwide development and implementation of AI-related standards, this publication asks for feedback regarding AI content origination, system testing, evaluation, verification, validation and more.
NIST issues GenAI challenge
Along with the four draft publications, NIST announced a new challenge series to help with the evaluation and measurement of GenAI technologies, guiding the safe and responsible use of digital content. For example, one goal is to help people determine whether content was made by a human or an AI.
U.S. Patent and Trademark Office requests comments on AI impact
NIST isn't the only organization getting in on the action. The USPTO is also seeking feedback on how "AI could affect evaluations of how the level of ordinary skills in the arts are made to determine if an invention is patentable under U.S. law." In fact, earlier in 2024, the office released new guidance on whether someone could patent an AI-assisted invention.
You can learn more about their request for public comment here.
Why machine identity security is critical to GenAI protection
The emergence of GenAI systems heralds a new frontier in both innovation and cyber threats. To keep your systems secure, including all the code comprising an AI model, you must be able to authenticate them.
Machine identity security plays this role, helping to thwart unauthorized access and tampering.
By maintaining visibility and control of every machine identity in your organization, you can quickly identify the unique versions and instances of the AI models being used. If one specific instance of a particular version starts acting outside its predefined parameters, you can easily pull the plug, so to speak.
In short, machine identity security is the "kill switch" for AI model behavior.
A foundational framework for the future
The Department of Commerce has signaled its intent to help Americans, and the rest of the world, understand GenAI and integrate it safely into their organizations.
And if we've learned anything from this announcement, it's that we're standing at the precipice of a new era in cybersecurity. If you'd like to explore these changes and their impacts further, I encourage you to join us in Boston for the upcoming Machine Identity Security Summit, Oct. 1-3.
Because it's not just about keeping pace with security: it's about leading the charge and redefining what cybersecurity means, so we can build a safe, secure digital future.