Introduction

On 20 January 2025, President Donald Trump rescinded former President Joe Biden’s 2023 executive order on artificial intelligence (AI), which had established comprehensive guidelines for developing and deploying AI technologies. While this action signals a shift in federal policy, its immediate impact on the AI landscape is minimal for several reasons.  

AI Key Initiatives

Biden’s executive order initiated extensive studies across federal agencies to assess AI’s implications on cybersecurity, education, labor, and public welfare. These studies have been completed, and their findings remain available to inform governmental and private sector strategies. The revocation of the order does not negate the value of these assessments, which continue to shape AI policy and development at multiple levels of government.  

Continuity in AI Policy Framework

Many of the principles outlined in Biden’s order were extensions of policies from Trump’s first term, emphasizing AI innovation, safety, and maintaining U.S. leadership. This continuity suggests that the foundational approach to AI governance remains stable despite the order’s formal rescission. The bipartisan nature of these principles ensures a relatively predictable policy environment for AI development.  

State AI Regulations Remain in Play

While the executive order dictated federal policy, state-level AI legislation has always been more complex and detailed. States such as California, Illinois, and Colorado have enacted AI-related laws covering data privacy, automated decision-making, and algorithmic transparency. Additionally, several states have passed or have pending bills to regulate AI in employment, financial services, and law enforcement.  

For example:  

  • California’s AI Regulations: Under these regulations, businesses using AI-driven decision-making tools must disclose how personal data is processed, and consumers have the right to opt out of automated profiling. Also, developers of generative AI systems or services must publicly post documentation on their website about the data used to train their AI.
  • Illinois’ The Automated Decision Tools Act (pending): Deployers of automated decision tools will be required to conduct annual impact assessments. They must also implement governance programs to mitigate algorithmic discrimination risks.
  • Colorado’s Consumer Protections for Artificial Intelligence: This legislation requires developers of high-risk AI systems to take reasonable precautions to protect consumers from known or foreseeable risks of algorithmic discrimination.

Even without a federal mandate, these state regulations ensure that AI governance remains a priority, adding layers of compliance for businesses operating across multiple jurisdictions.  

Industry Adaptation and Ongoing AI Initiatives

The tech industry had already begun adapting to the guidelines outlined in Biden’s executive order. Many companies established internal AI governance frameworks, anticipating regulatory scrutiny. These proactive measures will persist as organizations recognize that self-regulation remains the best way to mitigate risks and maintain consumer trust.  

Threat Actors Will Exploit AI Vulnerabilities

One thing that has remained true for decades is that threat actors will attack any technology they can. Cybercriminals have always exploited vulnerabilities for financial or strategic gain, whether on mainframes, floppy disks, early personal computers, cloud environments, or mobile devices. AI is no different.  

Adversarial attacks against AI models, data poisoning, and prompt injection threats continue to evolve. Today’s key difference is whether organizations deploy purpose-built security measures to protect AI systems from these emerging threats. Regardless of federal policy shifts, the need for AI security remains constant.  

Anticipated AI Regulatory Environment

The revocation of Biden’s order aligns with the Trump administration’s preference for a less restrictive regulatory framework. The administration aims to stimulate innovation by reducing federal oversight. However, state laws and industry self-regulation will continue to shape AI governance.  

Existing privacy and cybersecurity regulations—such as GDPR, CCPA, HIPAA, and Sarbanes-Oxley—still apply to AI applications. Organizations must ensure transparency, data protection, and security regardless of changes at the federal level.  

Global Competitiveness and National Security

Both the Biden and Trump administrations have emphasized the importance of U.S. leadership in AI, particularly concerning global competitiveness and national security. This shared priority ensures that strategic initiatives to advance AI capabilities persist, regardless of executive orders. The ongoing commitment to AI innovation reflects a national consensus on the technology’s critical role in the economy and defense.  

Conclusion

While President Trump’s revocation of Biden’s AI executive order may seem like a significant policy shift, little has changed. AI security threats remain constant, industry best practices endure, and existing privacy and cybersecurity regulations continue to govern AI deployments.  

Moreover, state-level AI legislation ensures that regulatory oversight persists, often in more granular detail than federal mandates. Businesses must still navigate compliance challenges, particularly in states with active AI regulatory frameworks.  

Ultimately, despite the headlines, AI innovation, security, and governance in the U.S. remain on the same trajectory. The challenges and opportunities surrounding AI are unchanged—the focus should remain on securing AI systems and responsibly advancing the technology.