The Bipartisan Senate AI Working Group released its long-awaited AI policy roadmap Wednesday, after spending the past year on a listening tour and holding a series of insight forums.
“Congress can’t and won’t solve every challenge AI presents today, but with this policy roadmap, we now have a foundation necessary to propel America into the age of AI,” Senate Majority Leader Chuck Schumer, D-N.Y., said Wednesday, speaking on the Senate floor as one of the group’s leaders.
The 31-page roadmap includes support for a comprehensive federal data privacy law, recommends non-defense AI innovation investments of at least $32 billion per year and offers several other policy suggestions regarding workforce training, language gaps in existing laws and standards for AI use.
The AI Working Group recommended incentivizing businesses to integrate new technologies and bring reskilled employees into the workplace. It also called for lawmakers to motivate blue- and white-collar employees to seek AI training from community colleges and universities.
Critics say the roadmap isn’t strong enough, serving as a guide for Senate committees rather than as specific legislation. But it is one piece of the puzzle for lawmakers as they aim to regulate the nascent technology.
The Software Alliance, also known as BSA, welcomed the bipartisan roadmap and its call for action on a comprehensive privacy law to protect personal data nationwide, according to a statement Wednesday. The group, whose members include Atlassian, Adobe, Microsoft and Zoom, said the roadmap’s rollout should “provide an impetus for action on legislation.”
Concerns over scope and litigation risks
The idea of another framework fell flat with some advocacy groups, which criticized the scope of the roadmap released Wednesday as well as the close partnership between big tech companies and the government.
“The report itself is most concrete in offering a roadmap for industry priorities while merely hand-waving toward many of the most pressing challenges associated with the widespread adoption of AI,” Nicole Gill, co-founder and executive director of big tech watchdog Accountable Tech, said in a statement Wednesday.
In the past year, tech CEOs have frequented Washington to offer voluntary commitments to safety pushes and risk management initiatives. They’ve also appeared in subcommittee hearings and provided advice on defending critical infrastructure from AI-powered attacks.
“Rather than act on the mountains of evidence and ideas about what to do, momentum to regulate AI was instead diverted into a closed-door industry-dominated process,” Amba Kak and Sarah Myers West, co-executive directors at AI Now Institute, said in a statement Wednesday. “The long list of proposals are no substitute for enforceable law.”
As the U.S. shapes its approach to regulation, lawmakers abroad have started to act. The EU’s AI Act is set to roll out in a phased approach through 2027, with initial efforts dating back to 2021.
Even without federal regulation in the U.S., using AI can have consequences.
“In the absence of regulation, we get litigation,” Forrester Senior Analyst Alla Valente told CIO Dive.
Several authors and news organizations have named OpenAI and its partner Microsoft in lawsuits alleging copyright and IP violations. Music publishers and artists have taken similar action against generative AI vendors, including Midjourney and Stability AI.
“Without regulations, the litigatory landscape will become financially burdensome, resource intensive and too risky for enterprises to fully execute on their AI strategy,” Valente said in a co-authored report in February.
Other industry experts have echoed the sentiment. Gartner predicts enterprise investments to curb copyright infringement — and avoid IP loss — will hurt ROI and slow generative AI adoption by 2026.