Experts and industry watchers are calling on Congress to help establish global safeguards and standards for AI use as companies race to deploy the technology.
“GPT-4 is more likely to respond helpfully and refuse harmful requests than any other widely deployed model of similar capability,” OpenAI CEO Sam Altman said Tuesday during a hearing held by the Senate Judiciary Committee’s subcommittee on privacy, technology and the law. The hearing marked the first in a series intended to write the rules of AI.
"However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said.
Christina Montgomery, IBM VP and chief privacy and trust officer, and New York University professor emeritus Gary Marcus joined Altman in calling for swift regulation, though the three were split on what form it should take. Exactly what regulation will look like and how it will impact businesses remains unclear.
“As technology advances, we understand that people are anxious about how it can change the way we live,” Altman said. “We are too, but we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides.”
Altman suggested Congress create a new agency empowered to grant and revoke licenses for AI development, set safety standards and require independent audits to ensure compliance.
Montgomery, however, said regulation should center on transparency and explainability, with risk-based rules enforced by existing agencies. Creating a new agency would take too long relative to the pace of innovation and adoption, she said.
Companies should disclose what data was used to train an AI model and how the model performs, while ensuring continuous governance, Montgomery said.
Marcus suggested Congress require a safety review prior to widespread deployment, modeled on the Food and Drug Administration’s approval process, create a monitoring agency and fund AI safety research.
There's a broad business appetite for AI, which is adding urgency to the need for regulation. Despite concerns over data governance and privacy, more than two-thirds of executives say the benefits of implementing generative AI outweigh the potential risks, according to a Gartner report published this month.
That appetite has intensified regulatory activity in the AI space. The White House convened a group of senior officials and technology industry experts earlier this month — including Altman and Alphabet CEO Sundar Pichai — to discuss the rise and ethical use of AI. The White House announced a public evaluation of AI systems, consistent with responsible disclosure principles, would take place this summer.
Several major AI companies, including Anthropic, Google, Hugging Face, Microsoft and NVIDIA, committed to the evaluation.
While there is no legislation requiring companies to build and deploy responsible AI models, organizations should still do so, experts said.
“The AI industry doesn’t have to wait for Congress,” Sen. Richard Blumenthal, D-Conn., said during the hearing.
OpenAI made it easier for users of its public ChatGPT model to turn off chat history and export their data so they can see what the tool stores. As OpenAI sets its sights on enterprise customers, the company has made clear it understands business users will require more guardrails than the public model provides.
But it’s important to remember that for enterprises, AI is not a shield, Montgomery said.
“If a company discriminates in granting credit or in the hiring process by virtue of the fact that they relied too significantly on an AI tool, they’re responsible for that today — regardless of whether they used a tool or a human to make that decision,” Montgomery said.