01-31-2025

Removing Barriers to American Leadership in Artificial Intelligence

Executive Order

The 1-Minute Brief

What: Executive Order 14179 revokes a previous executive order (E.O. 14110) that established safety, security, and fairness regulations for Artificial Intelligence. This new order directs federal agencies to dismantle those regulations and develop a new action plan focused on accelerating AI innovation to ensure U.S. global dominance in the field.

Money: The order does not appropriate new funds. Its financial impact will be indirect, shifting federal resources away from AI risk management and regulatory compliance towards promoting AI development and removing barriers to innovation.

Your Impact: The most likely direct effect is a faster rollout of new AI technologies with fewer government-mandated safeguards. This could speed up innovation but also increase risks related to biased algorithms, privacy, and consumer protection that the previous order sought to mitigate.

Status: Signed and issued by the President on January 23, 2025. Federal agencies are now acting to implement its directives.


What's Actually in the Order

Executive Order 14179 fundamentally redirects the federal government's approach to Artificial Intelligence. It cancels the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" policy from the previous administration and replaces it with a new policy focused on "America's global AI dominance." The order instructs federal agencies to suspend, revise, or rescind all actions taken under the old policy.

Core Provisions:

  • Revokes Previous Policy: Immediately revokes Executive Order 14110, which mandated broad safety and security testing, civil rights protections, and risk assessments for AI.
  • Develop New Action Plan: Requires the President's science, technology, national security, and economic advisors to create a new AI action plan within 180 days of the order's issuance.
  • Review and Rescind: Directs all federal agencies to review actions taken under the revoked E.O. 14110 and eliminate any that hinder the new policy of promoting AI dominance.
  • Revise OMB Guidance: Orders the Office of Management and Budget (OMB) to revise two key memoranda (M-24-10 and M-24-18) within 60 days to align them with the new, less restrictive policy. These memos concern how federal agencies govern, manage risk, and procure AI systems.

Stated Purpose (from the Sponsors):

The order states its purpose is to ensure the U.S. remains the global leader in AI innovation.

  1. To develop AI systems that are free from "ideological bias or engineered social agendas."
  2. To revoke existing policies that "act as barriers to American AI innovation."
  3. To sustain and enhance America's global AI dominance to promote "human flourishing, economic competitiveness, and national security."

Key Facts:

Affected Sectors: Technology, Government, Defense, Healthcare, and any industry developing or using high-impact AI systems.
Timeline: A new AI Action Plan is due by July 22, 2025. Revisions to OMB guidance are due by March 24, 2025.
Scope: The order applies to all executive departments and agencies, shaping how the entire federal government develops, buys, and uses AI.


The Backstory: How We Got Here

Timeline of Events:

The Era of Guardrails (2023-2024):

The previous administration viewed the rapid advancement of AI with both excitement and alarm, warning it held "extraordinary potential for both promise and peril." This led to a focus on building regulatory "guardrails."

  • October 30, 2023: President Biden signed Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It was a comprehensive, government-wide effort to manage AI's risks. The order established policies to protect against AI-driven bias, support workers, protect consumers, and ensure privacy.
  • March 28, 2024: The Office of Management and Budget (OMB) released Memorandum M-24-10, directing agencies on how to govern and manage risks from AI, particularly for systems that could impact citizens' rights or safety.
  • October 3, 2024: The OMB followed up with Memorandum M-24-18, which set specific rules for how federal agencies must procure AI systems, requiring vendors to provide documentation on data, testing, and risk mitigation.

Why Now? The Political Calculus:

  • Change in Philosophy: The introduction of E.O. 14179 reflects a fundamental shift in governing philosophy. The current administration prioritizes speed and competition, particularly with China, over the previous focus on regulation and safety.
  • Industry Influence: The new policy aligns with arguments from some in the tech industry that excessive regulation stifles innovation and hinders the ability of the U.S. to compete globally.
  • Fulfilling a Mandate: The order is a swift and direct reversal of a key policy of the preceding administration, framing the prior regulations as "burdensome" and ideologically driven. This move caters to a political base that supports deregulation and a more aggressive economic posture.

Your Real-World Impact

The Direct Answer: This directly affects the tech industry by removing regulatory hurdles and indirectly affects all Americans by changing the safety and fairness standards for AI used in areas like hiring, lending, and law enforcement.

What Could Change for You:

Potential Benefits:

  • Faster Innovation: A less regulated environment could allow U.S. companies to develop and deploy new AI technologies more quickly, potentially leading to new products and services.
  • Economic Growth: Proponents argue that unfettered AI development will boost the economy and maintain America's technological leadership.

Possible Disruptions or Costs:

Short-term (1-2 years):

  • Fewer Protections: The elimination of mandatory risk assessments and bias testing could lead to the deployment of AI systems with undetected flaws, potentially resulting in discriminatory outcomes in housing, employment, or credit applications.
  • Increased Misinformation: Reduced emphasis on content authentication standards could make it harder to combat AI-generated deepfakes and misinformation, especially in political campaigns.

Long-term:

  • Job Displacement: The previous order directed studies on AI's impact on the workforce; the new focus on rapid deployment could accelerate job displacement without corresponding support for affected workers.
  • Privacy Erosion: The prior framework called for tools to protect Americans' privacy from AI-powered data collection. A focus on deregulation may weaken these privacy safeguards.

Who's Most Affected:

Primary Groups: AI developers, technology companies, and federal agencies that procure AI systems.
Secondary Groups: Consumers, workers in industries susceptible to automation, and civil rights organizations concerned about algorithmic bias.
Regional Impact: Tech hubs like Silicon Valley, Seattle, and Austin may experience accelerated economic activity, but the societal impacts of AI deployment will be nationwide.

Bottom Line: The order trades the previous administration's safety-focused regulations for a policy of accelerated, market-driven AI development aimed at global competitiveness.


Where the Parties Stand

Republican Position: "A Light Touch for Innovation"

Core Stance: The government should have a "light touch" to avoid stifling innovation and ensure America wins the global AI race.

Their Arguments:

  • ✓ Removing burdensome regulations will unleash the free market and allow entrepreneurs, not government, to shape the future of AI.
  • ✓ The U.S. must prioritize competition with China, and overregulation puts American companies at a disadvantage.
  • ✗ They oppose what they term "ideological" requirements, such as mandating diversity and equity considerations in AI models. They have also sought to block states from passing their own, more stringent AI laws, arguing for a single national standard that favors innovation.

Legislative Strategy: Support the executive order's deregulation. House Republicans have previously pushed for a 10-year moratorium on states creating their own AI regulations to prevent a patchwork of laws.

Democratic Position: "Guardrails for Safe and Fair AI"

Core Stance: AI development requires strong government oversight and "guardrails" to protect Americans from its potential harms.

Their Arguments:

  • ✓ They support the principles of the now-revoked E.O. 14110, which required safety testing and assessments of AI's impact on civil rights, consumers, and workers.
  • ✓ Federal regulation is essential to combat AI-driven misinformation, protect privacy, and prevent algorithms from deepening societal inequities.
  • ⚠️ While supporting innovation, they argue it must not come at the expense of safety and ethics. They express concern that an unregulated environment could lead to significant job displacement and other societal harms.

Legislative Strategy: Oppose deregulation and advocate for restoring and codifying the protections in the previous executive order. Many support giving federal agencies like the FTC more power to regulate the industry.


Constitutional Check

The Verdict: ✓ Constitutional

Basis of Authority:

The President's authority to issue executive orders stems from Article II of the Constitution, which grants the President executive power to manage the operations of the federal government. This order directs the actions of executive branch agencies.

Article II, Section 1 of the U.S. Constitution: "The executive Power shall be vested in a President of the United States of America."

Constitutional Implications:

Executive Power: This E.O. is a standard exercise of presidential power to set policy for and direct the activities of the executive branch. Presidents frequently use executive orders to reverse the policies of their predecessors.
Precedent: The Supreme Court has consistently upheld the authority of the President to manage the executive branch through orders, provided they do not usurp powers delegated to Congress or violate existing law.
Federalism: The order itself does not directly overstep into powers reserved for the states. However, related legislative efforts supported by the administration to create a moratorium on state-level AI laws could raise federalism questions by preempting states' rights to regulate commerce and protect their citizens.

Potential Legal Challenges:

A direct legal challenge to the President's authority to issue or rescind this executive order is highly unlikely to succeed. However, legal challenges could arise from the consequences of this order. For example, if a federal agency, following this new deregulated policy, deploys an AI system that is found to discriminate against a protected class, that action could be challenged in court for violating existing civil rights statutes.


Your Action Options

TO SUPPORT THIS ORDER

5-Minute Actions:

  • Call Your Rep/Senators: Use the Capitol Switchboard at (202) 224-3121. "I'm a constituent from [Your City/Town] and I support Executive Order 14179. I urge [Rep./Sen. Name] to support policies that promote AI innovation and deregulation to keep America competitive."

30-Minute Deep Dive:

  • Write a Detailed Email: Contact members of the House Committee on Science, Space, and Technology and the Senate Committee on Commerce, Science, and Transportation to express your support for a pro-innovation, light-touch regulatory framework.
  • Join an Organization: Groups like the AI Alliance advocate for policies that support open innovation in AI.

TO OPPOSE THIS ORDER

5-Minute Actions:

  • Call Your Rep/Senators: Use the Capitol Switchboard at (202) 224-3121. "I'm a constituent from [Your City/Town] and I am concerned about Executive Order 14179. I urge [Rep./Sen. Name] to support legislation that establishes strong safety and fairness guardrails for artificial intelligence."

30-Minute Deep Dive:

  • Write a Letter to the Editor: Submit a letter to your local newspaper explaining the risks of unregulated AI, such as algorithmic bias in hiring or the spread of misinformation.
  • Join an Organization: A wide range of groups are advocating for stronger AI safety standards and regulation. These include the Center for AI Safety (CAIS), the Center for AI Policy (CAIP), Americans for Responsible Innovation, and Accountable Tech.