Curbing AI Disinformation Requires Innovation, Not Regulation

This op-ed was co-authored by Pablo Garcia Quint, Technology and Innovation Policy Intern at Libertas Institute, and originally appeared in the Daily Caller on November 27, 2023.

As the 2024 election draws closer, our ability to flag AI disinformation seems to many no better than it was when ChatGPT hit the scene a year ago. Even with the tech industry’s abundance of resources, no technology exists that can catch AI-generated content 100 percent of the time.

Some look to government regulation as a solution, but this isn’t a problem we can simply regulate away. Stringent AI rules will be powerless to stop bad actors from taking advantage of AI. Even an outright ban in the U.S. would create a vacuum that foreign actors would rush in to fill. As Tim Wu noted recently, we should “be wary of taking premature government action that fails to address concrete harms.”

That doesn’t mean all is lost, but practically speaking, the only way forward is through. Competitiveness and innovation have always propelled this country forward, and confronting AI disinformation shouldn’t be any different. Instead of wasting our energy setting up premature, inefficient, and powerless regulations, we should incentivize development in the AI detection space.

With the introduction of AI-generated material into the broader online information environment, the capacity to quickly generate deceptive and timely disinformation is higher now than ever. This reality is clear from recent online spikes in AI-generated voice spoofs, AI-generated deepfake videos, and AI-generated deepfake images.

Fortunately, the market for AI detection is growing, and with it, a market for solutions. Google, for example, laid out a new rule requiring political candidates to disclose when they use AI in political ads. Meta recently followed suit, adding disclosure requirements for AI-generated political advertisements across the company’s social media platforms and barring political advertisers from using Meta’s generative AI advertising tools. Adobe is in the process of integrating “content credentials” that let users decide what to believe based on a piece of content’s traceable creation history.

While industry approaches are largely based on watermarking, in which special identifiers are embedded in content so its origin can be traced, smaller companies are coming up with different solutions. For instance, companies such as The New Provenance Project, The Content Blockchain Project, Democracy Notary, and many more integrate blockchain technologies into their systems. Startups like these offer a glimpse into the future of market solutions to identify AI disinformation.
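To make the blockchain approach concrete, here is a minimal sketch in Python of hash-based content provenance. The `Ledger` class, the record format, and the function names are illustrative assumptions, not the actual design of any company named above: a fingerprint of the content is registered when it is published, and anyone can later check a file against the ledger.

```python
import hashlib
import json
import time

# Minimal, hypothetical sketch of hash-based content provenance.
# "Ledger" is a plain in-memory list standing in for a real blockchain;
# the startups mentioned above use far more robust systems.

class Ledger:
    def __init__(self):
        self.blocks = []  # each block links to the previous via its hash

    def append(self, record: dict) -> str:
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        block = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
        # Hash the block's contents (record + link to the previous block).
        block["block_hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(block)
        return block["block_hash"]

def register(ledger: Ledger, content: bytes, creator: str) -> str:
    """Record a fingerprint of the content at publication time."""
    digest = hashlib.sha256(content).hexdigest()
    return ledger.append({"content_hash": digest, "creator": creator})

def verify(ledger: Ledger, content: bytes) -> bool:
    """Check whether the content matches any registered fingerprint."""
    digest = hashlib.sha256(content).hexdigest()
    return any(b["record"].get("content_hash") == digest for b in ledger.blocks)

ledger = Ledger()
register(ledger, b"original campaign video bytes", creator="Candidate A")
print(verify(ledger, b"original campaign video bytes"))  # True
print(verify(ledger, b"doctored campaign video bytes"))  # False
```

Because each block’s hash covers the hash of the block before it, quietly rewriting an old record would invalidate every block that follows, which is what makes this style of ledger tamper-evident.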

Looking further into the future, researchers are also offering a diverse set of possibilities, like embedding watermarks into blockchains or applying invisible noise to images and videos so that altered copies come out visibly degraded. Others have proposed detecting the inconsistencies in head poses and facial expressions that deepfakes tend to produce. Moving beyond content detection alone, some studies even suggest combining crowd-wisdom verification with blockchain storage to create a more robust system for identifying and verifying disinformation, as sketched below.
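Here is a minimal sketch, again in Python, of how crowd-wisdom verification might produce a tamper-evident record. The labels, the quorum threshold, and the record format are illustrative assumptions rather than any study’s actual protocol.

```python
import hashlib
from collections import Counter

# Hypothetical sketch: aggregate reviewer verdicts on a piece of content
# and pair the consensus with a hash of the content so the record can be
# anchored somewhere tamper-evident. Labels and quorum are assumptions.

def crowd_verdict(votes: list[str], quorum: float = 0.66) -> str:
    """Return the majority label if it clears the quorum, else 'undetermined'."""
    if not votes:
        return "undetermined"
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= quorum else "undetermined"

content = b"suspect campaign video bytes"
votes = ["authentic", "authentic", "ai-generated", "authentic", "authentic"]

record = {
    "content_hash": hashlib.sha256(content).hexdigest(),
    "verdict": crowd_verdict(votes),
    "votes": len(votes),
}
# In a full system this record would be appended to a ledger like the one
# sketched earlier, rather than simply printed.
print(record)
```

Storing the crowd’s verdict alongside the content’s fingerprint means the verification result itself becomes auditable, not just the content.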

Of course, none of these approaches is perfectly effective. Watermarking can be broken, bypassed, washed out, and even added where it doesn’t belong. Adobe’s tracking and tagging concept is still optional and thus currently does nothing to dissuade bad actors. And novel solutions like blockchain integration still face research gaps and false positives. But the market incentives to keep getting better at AI detection are there.

Our technological capability will improve, just as it has in every other industry. But heavy-handed regulations and executive orders are doomed to fail. Certainly, as we look to the future, long-term solutions, industry standards, and regulatory frameworks for AI will be meaningful. But in the face of a rapidly approaching challenge, what’s needed is nothing short of a moonshot, and that’s something only the market can pull off.

About the author

Caden Rosenbaum

Caden Rosenbaum serves as the senior policy analyst leading the tech and innovation policy portfolio. An attorney with experience analyzing laws and regulations and advocating for substantive reform, he contributed to the passage of the nation’s first portable benefits law, allowing companies to offer meaningful work-related benefits to gig workers in Utah. Caden’s diverse background in technology, innovation, and workforce policy includes many years working in Washington, DC alongside some of the country’s brightest minds at organizations like TechFreedom and the Center for Growth and Opportunity at Utah State University. Caden enjoys spending time with his wife, tending to his strawberry garden, and competing online in VR table tennis matches.
