AI Regulation

Regulators Tying the Hands of Innovation in the Face of an AI Race


This article was co-authored by Pablo Garcia Quint, a Technology and Innovation Policy Fellow at Libertas Institute.

The Risks for Llama 3.1 and Other Open-Source AI Models

As the first frontier-level open-source AI model, Llama 3.1 has the potential to revolutionize the industry by democratizing access to cutting-edge AI technology. Compared to its closed-source counterparts, Llama 3.1 offers superior cost-efficiency while simultaneously stimulating a vibrant ecosystem. However, this promising trajectory is threatened by ill-conceived and preemptive regulations. Just as AI developers are beginning to find their footing, lawmakers and regulators are jumping the gun by proposing a complicated web of misguided and conflicting laws.  

This year alone, legislators introduced 673 AI-related bills across 45 states, 72 of which have been enacted into law. Of particular note are the proposals out of California and Hawaii. Federal legislation and regulations such as the Department of Commerce Rule, the Biden Executive Order (EO), and various bipartisan bills in Congress pose serious risks to innovation.

With vague, all-encompassing definitions, punitive penalties, and a lack of consensus across different government agencies, the legal landscape for AI development has quickly become a convoluted minefield of unnecessary risk and regulatory costs. 

What’s to come from this regulatory posture, where prevention of harm matters more than innovation? At best, it will create a difficult environment for AI developers and small entrepreneurs to operate in. At worst, innovation and R&D will move abroad, leaving the U.S. behind in the AI race and powerless to set regulatory standards at all.

California and Hawaii’s overreach will kill AI innovation and force developers out.  

Proposals from states like California and Hawaii push for sweeping regulations with heavy-handed oversight and liability for AI developers. California’s SB 1047 is a prime example. The first version of the bill called for a new regulatory body called the Frontier Model Division (FMD), which would have had sweeping authority to enforce endless safety checks, third-party audits, and fines up to $30 million. The FMD would have also had the power to rope small startups into its regulatory web by lowering oversight thresholds at will. 

Much of this problematic language was struck following outrage from the tech industry and bipartisan criticism from Members of Congress. Yet as critics have pointed out, even after the amendments the bill would still dictate a barrage of safety checks, third-party audits, and fines of up to $30 million. The latest version also sets an arbitrary $100 million threshold for AI model development and a $10 million threshold for developing derivatives, effectively preventing developers from collaborating on an existing open-source AI model and discouraging any further innovation. This would hinder AI progress broadly by snuffing out development in Silicon Valley – the hub of technology talent.

California’s SB 1047 is now on Governor Gavin Newsom’s desk, with many urging a veto to prevent regulatory overreach that would harm California’s position as a leader in tech and innovation. The Governor’s decision is not certain, but the bill passed the State Assembly 41 to 9 and the Senate 29 to 9, signaling that he is likely to sign it into law.

Hawaii’s proposed bill would go even further, forcing all developers, large and small, to seek permission from the newly created Office of Artificial Intelligence Safety and Regulation before beginning development, and again every time an update is made – even updates that patch safety issues and security flaws. For open-source AI models like Llama 3.1, where updates and iteration are central to the developer community, the resulting backlog of applications to Hawaii’s regulatory authority would almost certainly stall meaningful development within the state.

Federal AI rules create conflicting mandates that burden companies.

At the federal level, the administrative state has unsurprisingly been the first to move, tending toward a regulatory approach that is both restrictive and ineffective. The White House first set out a directive for executive agencies in President Biden’s AI Executive Order (EO). The EO relies on authority from the Defense Production Act to impose technical requirements and mandatory disclosures on developers – a time-tested mistake in public policy.

In response to the EO directive, the Department of Commerce proposed rules requiring all Infrastructure as a Service (IaaS) providers to verify and report the identities of their customers, hoping to weed out foreign actors abusing technology like AI to weaken U.S. infrastructure. However, as many organizations noted in response, such a requirement would be impossible to satisfy in light of state and global data protection laws prohibiting that level of access to customer identification. In this sense, the DoC Rule presents companies with a Catch-22: either a company complies with the DoC Rule and pays hefty penalties for breaking data protection laws, or its officers risk jail time along with hefty penalties for failing to comply with the DoC Rule.


Congress is considering multiple legislative proposals related to AI. H.R. 8756 aims to oversee AI by creating an authority at every federal agency; H.R. 8315 aims to prevent foreign adversaries from exploiting U.S. AI technology through export controls; S. 4862 aims to ensure that “new advances in artificial intelligence are ethically adopted to improve the health of all individuals, and for other purposes”; and S. 2770 along with S. 3875 would require disclaimers in political ads when politicians use AI.

At best, these legislative and regulatory proposals are well-intentioned but misguided, rooted in fear-based responses to the unknown. At worst, this flavor of regulatory action is rooted in a broader desire to regulate tech companies into oblivion. Either way, overly restrictive mandates and prohibitions like these are bound to stifle innovation and hinder our ability to remain a leader in the AI race. Investment in small businesses would be cut nearly in half, and market share would only become more concentrated, as large companies like Apple, Meta, Google, and Microsoft would be the only ones capable of withstanding these onerous regulatory burdens – similar to the outcomes of the EU GDPR since 2018.

The best move for open-source AI? Get the government out of the way.  

In contrast to the crushing costs of regulatory proposals, the value of open-source AI far outweighs any fears regulators may be responding to. According to researchers at Harvard Business School, without open-source software, firms would spend 3.5 times more on software alone. Moreover, open-source artificial intelligence offers superior security through transparency, enabling constant scrutiny and ongoing enhancements. This continuous improvement fosters more secure systems, similar to how Linux became the industry standard by combining advanced features with greater security than its closed-source rivals.

Open-source software also offers significant flexibility, meaning businesses aren’t tied down to one vendor through a contract or cost commitment; they are free to choose what works best for them instead. That freedom is amplified by the global open-source community, which offers a broad and diverse array of developers driving innovation and ensuring that the software evolves to meet the changing needs of its users.

As states, federal agencies, and Congress weigh in without regard for the impact on developers, the future will undoubtedly be marred by excessive regulation. If we want open-source AI models like Llama 3.1 to thrive, collaboration to abound, and small businesses to flourish, we need to ditch the suffocating grip of regulation and let innovation flow.