Within the House Energy and Commerce Committee’s new budget reconciliation bill lies an alarming provision: a decade-long moratorium on state regulation of artificial intelligence.

The proposed ban is extraordinarily broad, prohibiting states from enforcing both new and existing AI laws. Sen. Ted Cruz (R-Texas) has announced he will soon introduce similar legislation in the Senate. If adopted, it would extinguish the only meaningful effort to protect Americans from AI-related risks.

In recent years, states have quietly become the front line against technology’s potential harms. From data privacy to children’s online safety, the federal government has failed to act. States have stepped in to fill that regulatory gap, with dozens passing meaningful legislation to protect the public, particularly when it comes to AI.

Since ChatGPT’s release two and a half years ago, Congress has puttered, backtracked and ultimately produced little AI regulation. States, on the other hand, took the lead.

Colorado passed broad rules on algorithmic discrimination in high-risk models; Tennessee regulated the use of artists’ likenesses in AI replicas; and Wisconsin now requires campaigns to label political ads that use AI. Those regulations are just a sampling: As we detail in our annual report on state technology policymaking, 41 states enacted 107 pieces of AI-related legislation last year.

Notably, for the past several years, the push for state-level AI regulation has been bipartisan. Democratic and Republican legislators across every state have introduced new AI regulation — and in some cases even partnered on bills. These lawmakers are responding to constituents across the political spectrum, a majority of whom are concerned that regulation of AI will be too lax.

This year, existing state proposals already seek to address some of the most serious emerging harms posed by AI. For example, there is a growing movement to require that health insurance companies have humans review claim denials made by AI models.

Other laws would require that companies disclose when AI models are used to deny coverage, deny a mortgage or inflate the price of rental housing.

AI regulation is not without issues. Many proposals are misguided; some would probably create regulatory burdens and impair American innovation and competition. Given the strategic importance of the sector for the economy and national security, concerns about overregulation are warranted. But lawmaking is stronger when the states help generate new ideas for what and how to regulate — and then test them.

Regulation is hard, especially for new technologies. Policies often have unintended consequences: In some cases, state environmental regulation has worsened the housing crisis, and age verification laws have inadvertently benefited noncompliant, foreign-based pornography sites. AI regulation will bring similar pitfalls.

Such challenges are exactly why states play a critical role; they can surface new ideas and produce important evidence that informs federal policymaking.

For example, several weeks ago, Congress passed the Take It Down Act, which criminalizes the nonconsensual sharing of authentic and computer-generated intimate imagery.

But the federal legislation was undoubtedly buoyed by state efforts; dozens of states had debated and enacted similar laws in the past five years.

State AI regulation is hardly perfect.

State lawmakers must research, write and negotiate complex policy issues with fewer resources than their federal counterparts.

The disjointed nature of state-level regulation means citizens are granted varying rights and protections across jurisdictions. Plus, companies find it easier to comply with uniform laws rather than navigating dozens of regulatory environments.

Hence, we are not arguing that regulation should be left to the states alone. But Congress is moving to undermine the only concerted legislative effort aimed at balancing AI’s myriad risks and benefits without offering federal legislation in its place. Given the speed at which companies are developing increasingly powerful AI systems, the resulting regulatory gap is especially concerning.

The proposed moratorium on state AI policymaking would be far worse than Congress’s inaction on the issue.

States play an essential role in AI regulation — eliminating that role would leave Americans more exposed to risks and with fewer avenues to influence AI policies. And by denying states the ability to generate and test new ideas, this ban would undermine the quality of federal AI regulation, if we ever get it.

Scott Brennen is director of the NYU Center on Technology Policy. Zeve Sanderson is executive director of the NYU Center for Social Media & Politics.