Advisories August 1, 2025

Privacy, Cyber & Data Strategy Advisory | Trump Administration’s AI Action Plan to Promote the U.S. AI Industry Through Deregulation, Expanded Infrastructure, and Diplomacy

Executive Summary
Our Privacy, Cyber & Data Strategy Team breaks down the three “pillars” of the Trump Administration’s AI action plan to foster innovation and growth domestically and abroad in the U.S. artificial intelligence sector.
  • Identify and reduce the federal regulatory barriers that impede AI development and deployment
  • Speed up federal permitting for data centers and remove regulatory blocks to large-scale AI and computing facilities
  • Take the lead in exporting U.S. AI technology and standards across the globe

President Donald Trump views artificial intelligence (AI) as a critical frontier for the United States, and on July 23, 2025, he unveiled his Administration’s AI policy agenda to support that vision. 

Winning The Race: America’s AI Action Plan and three accompanying Executive Orders (EOs) (Preventing Woke AI in the Federal Government, Accelerating Federal Permitting of Data Center Infrastructure, and Promoting the Export of the American AI Technology Stack) emphasize the goals of accelerated innovation through deregulation, expanded energy infrastructure and data center development, and global leadership for the export of U.S. AI technology and policy. 

The plan recommends over 90 policy actions across three “pillars”: (1) Accelerate AI Innovation; (2) Build American AI Infrastructure; and (3) Lead in International AI Diplomacy and Security. It builds on Trump’s January 23, 2025 Executive Order (14179) promoting the removal of barriers to AI innovation through deregulation, which also rescinded former President Joe Biden’s October 30, 2023 Executive Order (14110) emphasizing that AI policy must align with civil rights and equity goals, avoid reinforcing discrimination, and ensure accountability through oversight, community engagement, and rigorous regulation. 

Trump spoke about his Administration’s new plan during a keynote address at an AI summit in Washington, D.C. before signing the EOs: “[M]y administration will use every tool at our disposal to ensure that the United States can build and maintain the largest, most powerful, and most advanced AI infrastructure anywhere on the planet. America needs new data centers, new semiconductor and chip manufacturing facilities, new power plants and transmission lines.”

The plan’s key technical authors are Michael Kratsios, director of the Office of Science and Technology Policy, and David Sacks, White House AI and cryptocurrency czar. During Trump’s first Administration, Kratsios served as U.S. chief technology officer and acting undersecretary for research and engineering at the Defense Department. Before entering government service, Kratsios was a principal at Thiel Capital and Peter Thiel’s chief of staff. 

Sacks is considered a member of the “PayPal Mafia,” a group of early PayPal employees, including Thiel and Elon Musk, who went on to launch or invest in other major technology companies. Trump appointed Sacks to the newly created AI and crypto czar role to guide national policy on these technologies. The appointment has been welcomed by the AI and crypto industries, where Sacks is seen as someone who emphasizes innovation over administrative rulemaking and regulatory overreach. 

In evaluating the Administration’s push for reduced AI regulation, it may be helpful to look at an analogous situation in the late 1990s and early 2000s, when the United States experienced a dramatic surge in technological innovation and internet-based business growth, commonly referred to as the dot-com boom. This period saw the rapid emergence of startups and digital platforms, fueled by venture capital and a widespread belief in the transformative power of the internet. One tenet of the dot-com era, embraced by many, including members of the PayPal Mafia, was that Congress should initially refrain from enacting significant regulatory or legislative constraints on the burgeoning technology sector. Many believe this restraint was a key factor in the success of U.S. internet businesses, and Kratsios and Sacks believe AI businesses must be given similar space to succeed.

Pillar I: Accelerate AI Innovation
Reduce AI regulation and accelerate AI innovation

In emphasizing deregulation, the Trump Administration has signaled a decisive shift from previous federal government AI policy. The Administration argues that excessive red tape could stifle the private sector’s ability to lead in AI development. Pillar I calls on federal agencies to identify and reduce regulatory barriers that impede AI development and deployment. Specifically, the Office of Science and Technology Policy is tasked with gathering input from businesses and the public on existing federal regulations that hinder AI innovation and adoption, and the Office of Management and Budget is charged with coordinating with all federal agencies to “identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development or deployment.” 

Pillar I further lays out policy recommendations for a coordinated federal effort to accelerate domestic AI innovation and infrastructure. The Department of Commerce, in collaboration with the National Institute of Standards and Technology, is tasked with developing technical benchmarks and standards to guide safe and effective AI deployment. The National Science Foundation and the Department of Energy are expected to expand funding for foundational AI research and national AI research infrastructure, including high-performance computing resources.

In addition, Pillar I calls for rebalanced enforcement by the Federal Trade Commission (FTC) of AI businesses. The FTC will now reassess its enforcement approach to AI businesses, particularly investigations initiated under the Biden Administration. The goal is to ensure that FTC actions do not impose undue burdens on AI innovation or advance liability theories that could stifle technological progress. This includes a comprehensive review of existing FTC investigations, final orders, and consent decrees to determine whether they align with the Administration’s emphasis on reducing unduly burdensome regulations.

Open-source AI models, datasets, and testing 

Pillar I emphasizes the strategic importance of open-source and open-weight AI models. Models that are freely available for download and modification are seen as essential for democratizing AI, enabling startups, academic institutions, and public-sector entities to experiment, adapt, and deploy AI systems without being locked into proprietary ecosystems. The plan directs the federal government to create a supportive environment for open models with the goal of lowering technical barriers to entry, accelerating scientific discovery, and promoting U.S. leadership in AI. 

In tandem with promoting open-source AI models, the plan calls for a national effort to build the “world’s largest and highest quality AI-ready scientific datasets,” while maintaining respect for individual rights and ensuring civil liberties, privacy, and confidentiality protections. The plan treats high-quality data as a national strategic and economic asset for AI innovation.

Through the plan, the Administration also hopes to speed up the adoption of AI. The Administration believes critical sectors like health care have been hesitant to realize AI’s full potential due to perceived regulatory complexity and unclear governance. The plan urges a coordinated federal effort to establish regulatory sandboxes and AI centers where industries can deploy and evaluate AI tools, helping shift the dynamic toward a “try-first” experimentation culture that accelerates the integration of AI across American industry.

Invest in science, interpretability, and evaluation of AI

The plan asserts that maintaining U.S. leadership in AI innovation requires not only continued investment but also a strategic focus on the most technically challenging areas of research. It identifies interpretability,1 control, and robustness as critical gaps in current AI capabilities, particularly in modern large language model systems, where it may be difficult to explain why a model produced a specific output. This unpredictability poses significant barriers to deploying AI in national security and other high-stakes environments where reliability is paramount and lives are at stake. To address this, Pillar I proposes a national AI evaluation ecosystem, enabling systematic assessment of AI model performance and safety. The Administration posits that such infrastructure would support both regulatory oversight and industry standards, laying the groundwork for responsible AI integration across sensitive sectors.

Pillar II: Build American AI Infrastructure
Modernizing power infrastructure, streamlining permitting, and developing a grid to match the pace of AI innovation

Pillar II recognizes that AI is the first digital technology that challenges America to build vastly greater energy generation. As AI accelerates toward becoming a foundational technology, the plan treats the physical infrastructure required to support it, including energy systems, chip manufacturing plants, and data centers, as a critical national priority. The plan also asserts that the current U.S. permitting and regulatory framework impedes timely development of new power generation, and it acknowledges that the nation’s aging electric grid is ill-equipped to meet the demands of AI-driven innovation. Commentators agree that as data centers and energy-intensive industries scale, the grid will face mounting pressure from both electrification and technological advancement. The plan calls for a comprehensive strategy to expand and modernize the grid.

To further support Pillar II’s policy recommendations on energy, Trump signed an EO directing the acceleration of federal permitting for data center infrastructure and aiming to remove regulatory blocks that slow the deployment of large-scale AI and computing facilities. It encourages financial incentives such as loans, grants, and tax benefits for the development of “qualifying projects” that involve over 100 megawatts of new electric load or at least $500 million in capital investment, including data centers, semiconductor manufacturing facilities, and energy infrastructure. To expedite these projects, the EO directs federal agencies to streamline environmental reviews, and to leverage existing permitting exemptions. It also authorizes the use of federal lands, including brownfield and Superfund sites, for infrastructure development, framing the initiative as essential to national prosperity and security.

In addition to regulatory streamlining, the EO revokes Biden’s January 14, 2025 Executive Order (14141), which outlined a national strategy to build AI infrastructure, such as data centers and energy systems, on U.S. soil to safeguard national security, strengthen supply chains, and maintain global leadership in AI development. That order also sought to advance clean energy technologies and required environmental reviews to mitigate impacts on surrounding communities, a focus that stands in contrast to the current Administration’s approach. The Trump Administration argues that these changes are necessary to maintain U.S. leadership in AI and advanced manufacturing by accelerating the physical infrastructure needed to support next-generation technologies. 

Securing the AI ecosystem: strengthening infrastructure, promoting secure-by-design technologies, and enhancing federal incident response

The plan recognizes that AI systems are becoming more advanced in software engineering and code generation, and that their role in cybersecurity is rapidly evolving. AI is increasingly seen as a powerful tool for both cyber offense and defense. AI-enabled defensive tools can help organizations detect vulnerabilities, respond to attacks in real time, and adapt to new threat vectors. However, the integration of AI into sensitive systems also introduces new risks. AI systems themselves can become targets, vulnerable to adversarial inputs such as data poisoning and privacy attacks that can degrade performance or compromise functionality. Pillar II requires that AI systems used in national security applications be secure by design, meaning they are engineered for resilience, equipped to detect performance anomalies, and capable of flagging malicious activity. To prepare for the increased use of AI in cybersecurity, Pillar II urges the U.S. government to promote the development and incorporation of AI incident response actions into existing incident response doctrine and best practices for both the public and private sectors.

Pillar III: Lead in International AI Diplomacy
The plan posits that to maintain global leadership in AI, the United States must not only invest domestically but also actively promote the international adoption of American AI systems, hardware, and standards. The plan prioritizes international diplomacy and national security by promoting collaboration with trusted allies and strategic engagement with adversaries. It explicitly identifies China as the primary technological rival, naming no other country with similar specificity or concern.

Export American AI to allies and partners

Pillar III and the EO on exporting U.S. AI technology abroad establish a national strategy to promote the global adoption of American AI technologies, including hardware, models, software, applications, and standards, by allied nations. The EO specifically recognizes AI as a cornerstone of future economic growth and national security and directs the Department of Commerce to launch the American AI Exports Program, which will support the global deployment of comprehensive U.S.-origin AI technology packages. The Secretary of Commerce will issue a public call for proposals from industry-led consortia, requiring each proposal to include solutions for technologies ranging from hardware and cloud infrastructure to AI models, cybersecurity measures, and sector-specific applications. Proposals must also identify specific “target countries or regional blocs for export engagement.”

Countering Chinese influence and strengthening export controls

Pillar III prioritizes efforts to counter Chinese influence in shaping global AI governance frameworks. As China seeks to embed surveillance-oriented standards, particularly in facial recognition, into international norms, the United States is working to ensure that “like-minded nations” cooperate to encourage the development of AI in line with shared values. Central to this strategy is the strengthening of export controls on advanced AI computing. To achieve this, Pillar III directs the United States to implement “creative approaches” to export-control enforcement on sensitive technologies, particularly in semiconductor manufacturing and AI infrastructure, closing existing regulatory gaps and preventing adversaries from leveraging American innovations. The plan further directs the United States to encourage allies to adopt similar restrictions, and to discourage the backfilling of restricted technologies, by utilizing tools such as the Foreign Direct Product Rule and secondary tariffs to achieve greater international alignment.

EU and other countries unlikely to reshape their regulations

Exporting U.S. AI technologies and policies globally will face significant challenges due to the diverse and often stringent international regulatory environments. Many countries and regional blocs operate under legal frameworks that place a greater emphasis on data privacy, cybersecurity, and algorithmic accountability than what is proposed in the plan. These differences could create friction, potentially slowing or complicating the success of Pillar III, particularly when U.S. standards diverge from existing legal requirements. A prime example is the European Union (EU), where the GDPR and the EU AI Act impose strict obligations on the processing of personal data of individuals in the EU and on AI systems, including transparency, risk classification, and data governance requirements. Any U.S.-based AI export package targeting the EU or other similar regulatory regimes will likely encounter rigorous legal scrutiny for compliance with those frameworks. 

Framing Policy Recommendations and Congressional Action

It is important to note that the three pillars are policy recommendations by the Administration. The successful implementation of the plan and the Executive Orders will require substantial federal agency coordination and legislative action from Congress. While the Administration can initiate strategic direction and interpretation of regulatory frameworks, many of the plan’s core components, such as funding for AI infrastructure, enforcement of export controls, and support for international AI diplomacy, will necessitate new appropriations and statutory authority from Congress.

Congress has already taken its first steps to support the Administration’s changes in AI policy by passing H.R.1, the One Big Beautiful Bill Act, which appropriates $150 million to the Department of Energy for AI efforts. Under Section 50404 of the Act, the Secretary of Energy is directed to mobilize the national laboratories to partner with private industry in the United States to curate the department’s scientific data, structuring, cleaning, and preprocessing it so that it is suitable for use in AI systems, and to initiate efforts to develop AI models for science and engineering that will accelerate AI innovation.

Conclusion
The plan marks a significant shift in U.S. federal policy toward a less-regulated AI environment, a move that is likely to generate political challenges for the Administration, particularly in areas concerning privacy, consumer protections, and civil liberties. Critics may argue that scaling back oversight, especially by agencies like the FTC, could weaken safeguards at a time when AI technologies are rapidly integrating into nearly every area of American life. Nonetheless, the plan’s emphasis on open-source development and broader public access to advanced AI models may unlock new opportunities for innovation, entrepreneurship, and global competitiveness.

As the AI policy landscape continues to evolve, we will monitor developments closely and assess their broader implications. 


1. AI models can now operate with billions of parameters encoding intricate patterns learned from vast datasets, making it challenging to trace specific outputs back to clear, human-understandable logic; the degree to which this is possible is known as the “interpretability” of the model.

If you have any questions, or would like additional information, please contact one of the attorneys on our Privacy, Cyber & Data Strategy team.


