The New Landscape of U.S. Tech and AI Task Forces

This article was co-authored by Tech and Innovation Research Fellow Pablo Garcia Quint.

The U.S. regulatory landscape often shifts to reflect the priorities of the administration in power, affecting industries across the board. Artificial Intelligence (“AI”), which is quickly revolutionizing our world, is no exception, drawing a polarized mix of regulatory responses at the state and federal levels.

The back-and-forth nature of AI regulation is especially pronounced at the federal level. The current Trump administration has indicated its preference for a hands-off approach, aiming to remove barriers to AI adoption. That approach reverses the Biden administration’s preference for precautionary measures emphasizing pre-market oversight and preemptive regulation.

In general, regulatory efforts can either guide innovation or constrain it. That’s why the real challenge for policymakers is finding a middle ground informed by objective fact-finding, stakeholder engagement, and sound regulatory frameworks. A time-tested method of accomplishing this is the task force, which is why, despite their differences, both administrations relied on task forces to evaluate how AI should be regulated and implemented across government and society. Yet the marked difference in the regulatory outcomes of these task forces demonstrates a clear contrast in direction between those guided by an innovation-first principle and those guided by the precautionary principle.

This distinction is even clearer in the states, our “laboratories of democracy,” where variations in the scope and authority of these task forces have made all the difference in either pushing away or paving the way for tech investment. A few states, like Utah, have been notably successful in translating task force findings into sound legislation. Others, such as California, keep defaulting to bureaucratic reflex despite sitting in tech’s backyard.

We’ve seen this same tension play out as administrations shift between market-driven and precautionary approaches to AI governance. Overall, what emerges from this ongoing policy evolution are clear lessons about what makes task forces effective versus what turns them into expensive exercises in missing the point.

Federal Task Forces from the Trump Administration

The Trump Administration set an early tone of techno-optimism and deregulation in its approach to tech and AI. Going back to 2017, the White House established the American Technology Council by executive order, aiming to modernize federal IT and advise on tech policy across government. It signaled the administration’s intent to bring Silicon Valley-style innovation into government, emphasizing efficiency and partnership with industry over bureaucracy.

By 2018, the Trump administration shifted its focus to AI, forming a federal task force, the Select Committee on Artificial Intelligence, to coordinate national R&D efforts. In 2019, it launched the American AI Initiative to accelerate innovation across the government.

[Image: AI trade to accelerate under Trump.]

This initiative directed agencies to prioritize AI investments, expanded research fellowships, and sought to remove barriers to AI experimentation. In practice, it doubled U.S. non-defense AI R&D funding over two years and established multi-agency research institutes in partnership with academia. The emphasis was clear: maintain America’s AI leadership through innovation, not regulation. As a 2020 Office of Management and Budget memo put it, “fostering innovation and growth in AI is the government’s top priority,” and agencies were explicitly instructed to avoid any rules that would stifle AI progress. Instead, they were encouraged to use light-touch approaches like voluntary guidelines, pilot projects, and industry collaboration to address AI risks.

In the waning days of Trump’s first term, this pro-innovation philosophy was codified into lasting policy. In December 2020, the White House issued guidance for “Trustworthy AI” in government, requiring agencies to adopt AI ethically while eliminating unnecessary regulatory hurdles. And in early 2021, bipartisan legislation, the National AI Initiative Act, took effect, creating a coordinated framework for AI across the government. That law established a National AI Initiative Office in the White House and authorized an external National AI Advisory Committee, ensuring the momentum for AI innovation would carry into the next administration. Notably, even out of office, Trump’s influence on tech policy persisted.

By his return to office, AI had become even more commercialized and popular, fueling a renewed focus on AI-driven growth. In April 2025, President Trump signed an executive order aimed at revolutionizing tech education. This order created a White House Task Force on AI Education, recognizing that nurturing home-grown AI talent is key to long-term innovation. The task force’s mission is to weave AI into the fabric of American education. It is charged with launching a nationwide “Presidential AI Challenge” to reward student and teacher innovations in AI, forging public-private partnerships to get cutting-edge AI tools into K-12 classrooms, and expanding AI training for both students and educators.

This task force is still in its early days, but its creation alone underscores a consistent theme from the Trump camp: the government intends to assess where AI can be applied across the board. From modernizing IT systems to reimagining STEM education with AI, these efforts favor collaboration with industry and academia to accelerate tech advancement, rather than heavy-handed regulation.

Biden-Era and Bipartisan AI Initiatives

When Biden took the reins on AI policy, the task forces kept rolling—but the energy shifted. What began under Trump as a push for bold, market-driven innovation morphed into something slower, more cautious, and ultimately less impactful.

The National Artificial Intelligence Advisory Committee (NAIAC), established in 2022 under the National AI Initiative Act, was a chance to keep the U.S. ahead in the global AI race. Instead, it focused on governance frameworks and AI “ethics” discussions. Its 2023 report leaned heavily on bureaucratic themes like transparency, public trust, and rights protections: buzzwords that signal a drift toward compliance-first policy.

Then came the Blueprint for an AI Bill of Rights, a 2022 framework that promised fairness, accountability, and privacy in AI systems. But it wasn’t binding. It offered no roadmap for builders—just vague guardrails based on political optics, not engineering realities.

In 2023, the White House convened top tech firms—OpenAI, Google, Amazon, and others—for a round of voluntary commitments. These companies promised to watermark content, test models for bias, and share safety practices. But without real policy or incentives, these were little more than performative pledges.

By fall 2023, Biden dropped a sweeping Executive Order on Safe, Secure, and Trustworthy AI. It tasked agencies with setting security standards, monitoring algorithmic harms, and managing AI deployment across the federal workforce. Task forces across federal agencies were trying to get ahead of every potential danger AI posed. The EO was preemptive yet cautious in tone, and it created new bodies such as the AI Safety Institute at the National Institute of Standards and Technology (NIST).

NIST’s role as lead on developing technical standards for AI was a smart call. Unlike most DC bureaucracies, NIST actually engaged with builders. Its AI Risk Management Framework was voluntary, flexible, and built in partnership with the private sector. In a sea of process-heavy policies, this was a rare win: it provided real guidance without regulatory overkill.

Still, the overall message was unmistakable. Biden’s AI task forces prioritized risk over reward, caution over competition. While Trump’s task forces looked outward—toward growth, speed, and capability—Biden’s looked inward, emphasizing audits and “trust.”

By April 2025, most of the Biden-era efforts were inherited, revised, or quietly sidelined by the returning Trump team. 

State-Level Task Forces Driving Local Innovation

Across the U.S., states are stepping up with their own tech-focused task forces—some focused on enabling innovation, others falling back on tired regulatory reflexes. The contrast is sharp, and the stakes are real.

Take Texas, where Governor Greg Abbott’s Artificial Intelligence Advisory Council embodies the state’s light-touch, liberty-conscious ethos. Established by HB 2060, the council is working to keep Texas at the forefront of tech while ensuring AI use in government is ethical and transparent. It’s a “let’s get it right without getting in the way” model—one that doesn’t default to distrust of new tools or chase headlines with bans.

But the real standout is Utah. While other states flirt with old-school regulatory frameworks, Utah rewrote the playbook. In 2024, it passed the first-in-the-nation Artificial Intelligence Policy Act, establishing an Office of Artificial Intelligence Policy and launching “AI Learning Labs,” sandbox-style partnerships where startups and policymakers co-develop smarter rules in real time. It’s pragmatic, pro-innovation governance grounded in the belief that policy should evolve alongside the tech, not try to shackle it. A complementary bill aimed at building a task force on AI in education, HB 168, didn’t pass but drew rare bipartisan praise, signaling that even in divided times, smart AI education policy is politically viable.

That’s not just rhetoric. Utah backed it with $100 million for a new AI research hub linking universities and private firms. Governor Cox is actively pursuing international AI partnerships, as with a recent trip to Canada’s Mila Institute, while also exploring sector-specific guidance for high-impact areas like mental health. The point isn’t to regulate everything, but to address real harms without slowing down progress.

[Image: AI Innovation at Utah AI Summit 2024.]

Meanwhile, California, long considered America’s tech capital, continues to prove that proximity to innovation doesn’t guarantee policy sanity. Governor Gavin Newsom vetoed a bill (SB 1047) that would’ve imposed sweeping AI regulations, and while he earned some credit for resisting regulatory overreach, his administration quickly pivoted to a task force strategy that still drips with bureaucratic caution. California’s “Frontier AI Task Force” may sound innovative, but it’s more about containing tech than catalyzing it. The group’s early recommendations involved third-party audits, mandated disclosures, and whistleblower protections, among other restrictive and unrealistic rules for developers. Newsom’s rhetoric about “humble” policymaking doesn’t mask the fact that California continues to treat innovation as a problem to be managed rather than a force to be unleashed. The more regulation pours in, the more its innovation edge dulls, and other states are taking notice.

U.S. AI policy is split: states like Utah back innovation through freedom and partnerships, while states like California stick to caution and control. Only one vision matches the speed of the AI era.

Recommendations for Future Task Forces

As both parties debate AI policy, they should keep one goal in mind: making the U.S. a global benchmark for AI regulation and innovation. Regardless of the tools, whether regulation or government incentives, the focus should be on smart, forward-looking policy.

AI task forces play a critical role in this effort. They help evaluate both the benefits and risks of the technology and can recommend a range of responses, from restrictive safeguards to innovation-friendly guidelines. 

That’s why how we design and operate these task forces matters. Their structure and approach will shape the impact they ultimately have. Effective task forces should:

  1. Sustain Innovation with Balanced Oversight

Looking ahead, sustaining momentum in AI development will be critical. A lighter regulatory touch can spur innovation, but task forces must also evaluate how AI is applied in sectors that directly affect people’s lives, such as education, healthcare, and finance. Smart oversight doesn’t mean stifling progress. It means guiding it with evidence, precision, and adaptability.

  2. Rely on Existing Legal and Institutional Frameworks

Rather than rushing into sweeping new regulations, AI governance should build on existing legal frameworks and the expertise of agencies already engaged in oversight. Institutions like the AI Safety Institute (AISI) are well-positioned to assess risks, set standards, and advise on harms that have actually materialized, not speculative ones. Where regulation is necessary, these entities should lead the way through iterative evaluation, not blanket prohibitions.

  3. Take a Strategic, Modular Approach

Scholars like Will Rinehart emphasize the need for strategic patience in regulation. AI governance should evolve with the technology itself. Rather than rush to regulate, governments should test, learn, and adapt. That means supporting experimentation, embracing open-source development, and encouraging decentralized problem-solving from the free market before turning to task force assessments of projects.

  4. Question the Importance of Task Forces

Task forces play a valuable role, but they cannot anticipate or resolve every possible application or implication of AI. Rather than relying on costly, potentially ineffective legislation or redundant initiatives, policymakers should begin by asking the right questions. The Abundance Institute, for example, poses key inquiries: Is this truly an AI-specific issue? Will the proposed legislation support an open and dynamic industry? Have existing legal and regulatory tools already addressed the specific harm in question?

  5. Test Ideas, Don’t Freeze Them

Ultimately, task forces should encourage experimentation, iterative policymaking, and real-world feedback. Suggesting vague guardrails or preemptive regulation as a result of a task force should be a last resort, and it is often more detrimental than beneficial.

About the author

Devin McCormick

Devin McCormick is the Technology and Innovation Policy Analyst at Libertas Institute, where he applies his diverse experience spanning tech sector equity trading and advanced AI/ML solutions. Before joining Libertas as a policy analyst, Devin developed strategic technologies at the State Department and interned at the Libertas Institute during the 2024 legislative session. A graduate of the School of Global Policy and Strategy (GPS) at UC San Diego, Devin holds a master’s degree that complements his bachelor’s in International Affairs from Florida State University. His academic and professional journey is further distinguished by his service as an Officer in the U.S. Navy Reserve. Driven by a commitment to integrate technology with sound policy, Devin joined Libertas to advocate for policies that harness technological innovations for societal benefit. Outside of his policy work, Devin enjoys staying active and exploring the great outdoors.
