Imagine if we’d tried to regulate the internet in the early 1990s, before most of us had email, online shopping, or search engines. We would’ve smothered the spark before it caught fire. That’s the risk we’re running with artificial intelligence. By rushing in with rules before the tech has matured, we could freeze progress in place, locking in limitations instead of unlocking potential.
Here’s the core of the issue: Artificial intelligence has enormous potential to streamline and improve the systems that run our lives—law, healthcare, finance. And in some cases, it’s already starting to deliver. But these are early wins, not finished products. Still, 44% of Americans polled by Overton Insights say they want strict government oversight to prevent risks and misuse—and that’s consistent across party lines. Policymakers are responding by jumping ahead of the technology, drafting rules for what AI might become instead of waiting to see how it actually develops. That premature approach could block the very progress we’re hoping AI will bring.
AI is already showing how it could boost productivity and reduce bottlenecks. Think faster court processing, real-time legal guidance, or quicker, more accurate lab results. These are just early glimpses into how AI might lift burdens on professionals and make critical services more accessible. But those outcomes require highly refined models. The technology needs time and space to grow into something reliable, scalable, and safe enough to handle high-stakes decisions.
Yet too many lawmakers are treating today’s AI tools like finished products. They see flaws, like biased image generators or chatbots giving clumsy answers, and jump straight to writing regulations. But those flaws are exactly what developers need time and space to fix, and they’re already working to do so. Just look at the rapid-fire improvements: every few months, newer versions of large language models and AI agents roll out with smarter outputs and fewer mistakes. Competition alone is pushing companies to address risks head-on or get left behind.
Of course, that doesn’t mean we should ignore real harms, but let’s be clear: we’re not operating in a legal vacuum. Agencies like the Federal Trade Commission, the Federal Communications Commission, and the Department of Justice are already enforcing existing consumer protection, anti-discrimination, and fraud laws. When one company falsely advertised an AI “lawyer,” the FTC stepped in; the company paid damages and had to make clearer disclosures. When deepfake robocalls used President Biden’s voice in an attempt to manipulate a primary election, the FCC acted fast and imposed a steep fine on the culprit. These tools for oversight are already in place, and they’re working.
What we need isn’t some new blanket regulation—it’s breathing room. Developers need time to refine their models, and policymakers need time to learn which risks are real and develop targeted responses to mitigate them. Jumping the gun could freeze AI in its awkward adolescence, making it harder, not easier, to serve society’s real needs down the line.
Policymakers at both the state and federal levels have ways to put learning first. Federal policymakers, for example, can make sure their regulatory proposals match real AI-related issues by strengthening coordination between the private sector and federal programs tasked with learning, like the NIST AI Safety Institute. At the state level, Utah’s AI Innovation Lab has already proven effective, identifying real policy issues and issuing clear recommendations that informed thoughtful, well-crafted legislation during the 2025 legislative session.
We can’t write smart rules if we don’t yet understand how the technology will truly take shape. But that clarity is coming, especially as AI begins to take root in core systems like hospitals, banks, and courtrooms. That’s when research-backed regulation will matter most.
Until then, policymakers should hold off on regulating based on half-formed fears and hypothetical risks. Because if we slam the brakes too soon, we won’t just slow down innovation; we’ll miss the destination entirely.