Landscape

AI Regulation

The US is developing a patchwork of 40+ state AI laws while the Trump administration pushes for federal preemption — creating a compliance minefield for AI developers and unclear legal terrain for deployment.

Created Apr 27, 2026 · Updated Apr 27, 2026

Overview

No comprehensive federal AI law exists in the US as of 2026. Over 40 states have enacted their own AI regulations or are actively considering legislation, producing a patchwork of overlapping, sometimes contradictory requirements. The Trump administration is working to preempt state laws in favor of national standards, but Congress has not yet acted.

Federal Position

The Trump administration opposes state-by-state AI regulation. In December 2025, President Trump signed an executive order discouraging state AI legislation — specifically targeting laws that could "stifle innovation" and anti-bias rules perceived as politically slanted. The order threatens to withhold federal funds from states that pass or enforce "onerous" AI laws. In March 2026, the White House issued guidelines for federal legislation that would protect children and limit electricity price hikes driven by AI data centers.

State-Level Activity

More than 1,500 AI bills were under consideration across states as of early 2026 (per Multistate.ai), alongside 100+ existing enacted laws. Key states:

California — Most comprehensive AI regulatory regime in the US:

  • Developers of advanced AI models must assess catastrophic risks and report serious safety incidents
  • LLM providers must prevent chatbots from discussing self-harm or sex with minors, and remind users they're chatting with AI
  • Starting August 2026: large tech platforms must apply invisible watermarks to AI-generated output
  • March 2026 executive order: AI tools used by the state must protect privacy, support civil rights, mitigate bias

Colorado — Sweeping 2024 law taking effect July 2026:

  • Requires "developers and deployers of high-risk AI systems" to protect consumers from algorithmic discrimination in education, employment, finance, healthcare, and housing
  • Developers must document limitations, training data, and mitigation efforts; deployers must assess impact annually and notify consumers when AI makes consequential decisions
  • Lawmakers face pressure from businesses to relax the annual-assessment and other requirements

Minnesota:

  • 2023: prohibited deepfake election interference
  • August 2026: prohibits health insurers from using AI to deny care without physician review
  • Pending: bans on AI removal of clothing from photos and on dynamic price-setting based on personal behavior

New York:

  • Starting January 2027: model makers with revenue >$500M must implement strict protocols against bioweapon or autonomous hacking tool creation; must audit annually and report incidents promptly

Ohio:

  • March 2026: prohibits AI replication of voice/likeness to sell products or produce intimate images without permission
  • Pending: bills denying AI systems legal personhood and banning AI-coordinated retail/rental price-fixing

Utah:

  • 2026: several bills refining its 2024 Artificial Intelligence Policy Act
  • Pending: prohibitions on nonconsensual sexually explicit deepfakes and on health-insurer AI denials of care without doctor input
  • Allows AI companies to apply for temporary regulatory relief while testing new technology

Common Regulatory Themes

Across states, similar concerns drive legislation:

  • Child protection — restricting AI chatbot access, preventing exploitation
  • Health decisions — requiring physician review before AI denies insurance claims
  • Deepfakes and synthetic media — nonconsensual intimate imagery, election interference, voice/likeness theft
  • Algorithmic discrimination — in high-stakes domains (employment, housing, credit, healthcare)
  • Transparency — watermarking AI-generated content, notifying users they're interacting with AI
  • Catastrophic risk — reporting requirements for frontier model developers

Compliance Implications

A given AI model may need to pass a bias audit in Colorado, apply watermarking in California, and meet reporting thresholds in New York — all while the federal government may preempt those same requirements. This jurisdictional conflict:

  • Increases the cost of building and maintaining AI systems
  • Adds legal risk to deploying new applications
  • Creates uncertainty about which requirements actually apply
  • Risks federal funding cuts for states that enforce their own AI rules

Open Questions

Whether Congress will pass comprehensive federal AI legislation — and whether it would preempt or complement state laws — remains unresolved. The EU AI Act provides one model for comprehensive national regulation; US legislators have drawn on it in drafting proposals, but no federal law has advanced.

Sources

  • "Big Pharma Bets Big on AI" — Andrew Ng / deeplearning.ai (newsletter, Apr 2026) (link) — US state AI law overview, federal preemption efforts, compliance landscape