Virginia Governor Glenn Youngkin vetoed AI-focused House Bill 2094 Monday, which proposed requirements for the development, deployment and use of high-risk AI systems and laid out penalties for noncompliance. The state-level legislative move is the latest sign of a push toward minimizing government oversight as the technology develops.
Youngkin characterized the bill as a burdensome regulatory framework that would undermine progress and stifle economic growth.
“The role of government in safeguarding AI practices should be one that enables and empowers innovators to create and grow, not one that stifles progress and places onerous burdens on our Commonwealth’s many business owners,” Youngkin said, explaining the veto decision. “This bill would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology in the Commonwealth of Virginia.”
The U.S. Chamber of Commerce urged Youngkin to veto HB 2094 for several reasons in a letter sent last month. The Chamber’s message aligns with the federal government’s shift to deprioritize new AI-specific laws.
“Existing laws and regulations already cover many AI activities,” the Chamber said. “The failure to follow such an approach could create duplicative, conflicting or burdensome regulations that would diminish the benefits of AI for consumers and the overall economy.”
Business leaders are keeping an eye on the evolving AI regulatory landscape, especially as states continue to shape proposals that could ultimately impact enterprise deployment plans.
Lawmakers introduced AI-related bills in 45 states last year, and nearly 100 of the 635 proposed bills were enacted, according to legislative tracker MultiState. Around 921 bills are already under consideration this year, roughly 10 per day; KPMG’s analysis puts the number closer to 800.
“This represents an unprecedented amount of legislative activity for any emerging technology and it threatens to create a confusing patchwork of regulatory policies that will entail enormous compliance costs, especially for smaller technology entrepreneurs,” Adam Thierer, senior fellow at R Street Institute, said in an email.
The Chamber of Progress, an industry coalition backed by dozens of tech companies, conducted an economic analysis in February that found complying with HB 2094 could cost Virginia’s AI innovators $290 million.
“Virginia has the highest concentration of tech talent in the country, and the growth of emerging technologies like AI is key to the Commonwealth’s economic future,” Chamber of Progress Northeast State and Local Government Relations Director Brianna January said in a statement Monday.
A broader shift
The Virginia veto is the latest signal of a growing preference for a governance style that minimizes regulatory burdens on developers.
Shortly after taking office in January, the Trump administration revoked past AI policies and tasked federal agencies with removing obstacles to AI innovation as its federal AI oversight approach took shape. Vice President JD Vance took the messaging abroad during the AI Action Summit in Paris last month, advocating for “pro-growth” AI policies and warning against excessive regulation.
There are signs that the shift in approach has trickled down into the states.
Earlier this month, Texas lawmakers introduced a revised version of the Texas Responsible AI Governance Act, which substantially lessened the requirements for private sector companies found in the original.
“While the new TRAIGA addresses concerns about AI’s potential to discriminate, it does not place requirements on any party to actively ensure that AI is not producing discriminatory effects,” Kim Miller, associate general counsel at MultiState, said in a blog post. The bill prohibits specific use cases that could cause discrimination, such as systems that encourage someone to harm themself or others.
Despite a looser regulatory environment, business leaders should keep in mind that existing laws still apply to AI. Youngkin highlighted the existing legal framework around potential AI risks, as did the Chamber in its letter to him in February.
“There are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel and more,” Youngkin said in his veto letter.