If you believe the people building today’s most powerful artificial intelligence systems, the stakes couldn’t be higher. OpenAI’s Sam Altman warns that AI could help design biological weapons. Elon Musk has called it “more dangerous than nukes.” These aren’t fringe critics. They’re the builders of the technology itself—and they’re sounding the alarm.
That makes regulating AI feel like a no-brainer. Yet we still have no comprehensive federal law governing it. Worse, the Trump administration recently sought to block states from passing their own AI regulations and to penalize those that tried.
This gets things exactly backwards.
The right question isn’t whether to regulate AI federally or at the state level. It’s how to do both—together.
First: the federal government must lead.
A strong national floor—minimum safety, transparency, and security standards—is essential. Much like the creation of the FAA did for aviation in 1958, a federal agency for AI could protect the public while also promoting innovation. That agency could certify high-risk models, stress-test new systems, investigate accidents, and publish safety audits, giving both developers and users clear expectations. Done right, it would increase public trust and unlock more investment and adoption, not less.
But states must be allowed to build on that floor.
Justice Louis Brandeis called states the “laboratories of democracy,” and AI is exactly the kind of experiment that needs many labs. This technology is advancing faster than any single agency can fully grasp. We simply don’t yet know what effective AI regulation looks like. Letting states test different approaches—on data privacy, algorithmic bias, consumer protection, even employment policy—is our best shot at finding out.
We’ve done this before. From environmental protection to health insurance marketplaces, federalism has allowed for bottom-up policy innovation that national regulators later adopted or refined. AI is no different. Federal rules ensure no one falls through the cracks. State rules help us figure out what excellence looks like.
The argument that regulation will slow us down doesn’t hold. The real risk isn’t falling behind—it’s racing forward blindly. And the public knows it. Pew reports that a majority of Americans are more concerned than excited about AI. Most think the government isn’t doing enough. Confidence in current oversight is lower in the U.S. than in almost any other country surveyed.
We should treat AI with the seriousness it demands. That means regulating it—not smothering it, but shaping it, learning from it, and preparing for a future none of us can fully predict.
If the technology really is that dangerous, letting the same people who built it decide all the rules is the most dangerous choice of all.
Read my full article on Bloomberg Opinion here: https://lnkd.in/eBEBpjWu