After a long campaign cycle, America has re-elected Donald Trump as the 47th President of the United States. In the wake of this news, many questions have bubbled to the surface, specifically about policy and regulation for artificial intelligence (AI).
Today, we’re exploring what that could look like under a second Trump term. Because AI is evolving quickly, and there are many risks inherent to not just the use of AI, but to AI models themselves.
Repealing Biden’s AI policy on day one
The Republican party has already indicated plans for significant changes to current AI policies, with the GOP platform arguing that current regulations risk hampering AI innovation and potentially infringe on free speech rights. Similarly, during a 2023 Iowa campaign rally, President Trump stated he would “cancel Biden’s AI executive order and ban the use of AI to censor the speech of American citizens on day one.”
What these changes might mean for AI development and regulation remains an area of ongoing discussion. To understand the potential impact, let’s first examine the current policy framework under Biden’s 2023 Executive Order (EO) on AI.
Recapping the Biden Executive Order on AI
Biden’s October 2023 Executive Order on “Safe, Secure and Trustworthy AI” is a sweeping, landmark document covering areas of security, collaboration, responsible use and more.
- Promoting AI leadership: Positions the US as a global leader in AI by fostering innovation and competition.
- Safety and security: Emphasizes the importance of mechanisms to assess and mitigate risks, including red-team test data sharing, establishing the AI Safety Institute (AISI) under NIST and building transparency into training and building models.
- Ethical and responsible use: Insists that AI aligns with democratic values, protects against risks to engineer dangerous bioweapons, safeguards Americans from fraud, works to prevent discrimination, bias and abuse (particularly in housing, healthcare and justice systems).
- International collaboration: Encourages cooperation to establish global norms and standards for AI governance, build robust frameworks, guide agency use and accelerate standards implementation.
- Protecting AI innovations: Promotes a fair, open and competitive AI sector, helping America maintain its lead in AI innovation.
- Infrastructure and talent: Emphasizes the need for investment in computational infrastructure and initiatives to attract and retain top-tier AI talent.
For more on the Biden Administration’s approach to AI regulation, you can also refer to the Oct. 24, 2024 Fact Sheet and Memorandum on a “Coordinated Approach to Harness the Power of AI for U.S. National Security.”
Increasing AI Regulation & What It Means for Open Source Innovation
Predicted changes under a Trump Administration
Although the Trump Administration’s plans for AI policy are currently unclear, we can infer a few things from recent GOP comments as well as President Trump’s first term in office.
Less regulation – a lighter touch
Based on what we know, companies can likely expect relaxed scrutiny on bias and discrimination in AI applications—perhaps what Senator Ted Cruz once dubbed “‘woke’ AI safety standards.”
Instead of sweeping regulation and reporting requirements, some predict a more industry-specific or case-by-case approach. Further evidence for this lies in 2020 guidance from the Office of Management and Budget, which stated that “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”
This statement gets to the heart of the matter. AI growth will likely be prioritized, with regulation as a secondary thought. That means that Silicon Valley can expect a shift to a laissez-faire approach to AI, with tech companies expected to self-regulate. But will they do so? Or will the engendering of rapid innovation overshadow safe development?
The answers to these questions are still up in the air, but according to a 2023 AI Policy Institute survey, 82% of voters don’t trust technology executives to self-regulate, with 72% preferring to slow down AI advancement, period.
Former Silicon Valley leaders see the issue, too. In fact, in the 2020 Netflix documentary, The Social Dilemma, one said, “I think we need to accept that it’s okay for companies to be focused on making money. What’s not okay is where there’s no regulation, no rules and no competition—and the companies are acting as sort of de facto governments.” This may have been said regarding social media, but it translates to AI advancement.
Greater focus on physical safety and military AI development
In addition to a lighter touch on regulation, some of the GOP want NIST to refocus their efforts on adversarial use of AI—specifically in the case of building bioweapons. Trump himself has stated that AI is a clear danger, but he has also stressed America’s need to stay ahead and capitalize on the technology.
Furthermore, the Washington Post has also reported on Trump’s allies drafting a new EO with the intent to continue the work the first Trump Administration started in 2019, particularly in military capacities through a series of “Manhattan Projects.” It also calls for the development of industry-led agencies to oversee AI models, rather than governmental bodies—which echoes similar sentiments as The Social Dilemma.
Cybersecurity implications for AI
Let’s steer away from that speculation for a moment and look at this through a broader lens of cybersecurity. Because in his previous time in office, Trump highlighted the need for robust cybersecurity, even having unveiled the country’s first national cybersecurity strategy in 15 years (at the time in Dec. 2017).
This policy emphasized the need to protect both the defense industrial base and critical infrastructure against malicious cyber actors, with priorities around identifying and prioritizing risk, building defensible government networks, deterring and disrupting malicious cyber actors, improving information sharing and sensing, and deploying layered defenses.
While this demonstrated a substantial commitment to cybersecurity, will the second Trump Administration take this into account as it pertains to AI? Or might the two areas contradict each other?
After all, the rapidly expanding AI sector and cybersecurity risks are intrinsically linked.
The inherent cybersecurity risks of artificial intelligence
AI models are complex and rife with risk if not secured appropriately. I, and many of my colleagues, have written at length about ways bad actors harness AI for nefarious purposes, as well as the dangers posed to AI models themselves.
For example, the OWASP Top 10 for LLMs sums it up well:
- Prompt injection
- Insecure output handling
- Training data poisoning
- Model denial of service
- Supply chain vulnerabilities
- Sensitive information disclosure
- Insecure plugin design
- Excessive agency
- Overreliance
- Model theft
You can learn more about the list in this blog, as well as the critical need for comprehensive machine identity security, including robust code signing operations to establish trust and provenance.
However, based on all those considerations, we must ask one more question.
If the Trump Administration seeks to reduce regulation to accelerate America’s AI innovation, could they be doing so at the risk of national cybersecurity?
Let’s look at it another way.
If companies work at hyper-speed to launch products—or proposed changes enable them to cut corners—critical vulnerabilities in models could be overlooked. Adversarial machine learning or even simple, unintentional mistakes could negatively affect millions of users—and not just Americans.
A lack of security considerations in the development of powerful AI models could have drastic global consequences, too.
What’s more troubling is that if these implications aren’t considered in military advancement, the repercussions of rogue AI could verge into nightmarish, sci-fi territory (an opinion shared by Hollywood director, James Cameron).
If federal oversight shifts, might it come back at the state level?
Many states like Colorado and Tennessee have already passed their own policies in the last couple of years, with Colorado requiring AI developers to use “reasonable care to protect consumers from risks of algorithmic discrimination.” Tennessee’s legislation, meanwhile, protects musicians and artists from unauthorized AI impersonation.
They’re far from alone, too. Even major tech personas like Elon Musk backed the contentious California AI bill that Governor Newsom ultimately shot down earlier this year.
Musk, Mark Zuckerberg, Sundar Pichai and Sam Altman have all advocated for some level of regulation on AI, with Altman saying, “If this technology goes wrong, it can go quite wrong…We want to work with the government to prevent that from happening.” Musk has similarly stated that there “should be a regulatory body” for overseeing AI to protect the public.
How can the USA balance AI innovation and regulation?
Only time will tell for sure where AI policy is headed, but I will say one thing. Come January 2025, the AI space will be one to watch. And it’s about to get a whole lot more interesting.
In the meantime, it’s important for all companies, whether developing AI or harnessing the power of other models, to implement robust security measures. To get started, you can learn more about AI model threats and recommended mitigations in the eBook linked below.