Dive Brief:
- IBM is ready to deploy a governance toolkit for its watsonx generative AI platform, the company announced Tuesday. The watsonx.governance solution, currently in preview, will be generally available in December, the company said.
- The toolkit validates large language model inputs and monitors outputs for bias, drift, compliance and other metrics, providing users with reports and dashboard capabilities designed to meet anticipated safety and transparency regulations, according to IBM. (A rough sketch of what checks like these can look like follows this brief.)
- “Our approach with the watsonx portfolio is to pull together our models, client models, partner models, and to have a governance wrapper around it all to make sure you understand the insights that you're getting,” Nick Otto, head of global strategic partnerships at IBM, told CIO Dive.
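IBM has not published implementation details in this announcement, so the following is only a hypothetical Python sketch of the kind of bias and drift checks a governance toolkit might run over model outputs. The names GenerationRecord, refusal_rate_by_group and length_drift are invented for illustration and are not part of the watsonx.governance API.

```python
# Illustrative sketch only: hypothetical metric checks of the kind a governance
# toolkit might run over logged model outputs; NOT the watsonx.governance API.
from collections import Counter
from dataclasses import dataclass


@dataclass
class GenerationRecord:
    prompt: str
    output: str
    group: str    # e.g., a customer segment label used for bias comparisons
    refused: bool # whether the model declined to answer


def refusal_rate_by_group(records: list[GenerationRecord]) -> dict[str, float]:
    """Bias proxy: compare refusal rates across groups; large gaps warrant review."""
    totals, refusals = Counter(), Counter()
    for r in records:
        totals[r.group] += 1
        refusals[r.group] += int(r.refused)
    return {g: refusals[g] / totals[g] for g in totals}


def length_drift(baseline: list[GenerationRecord],
                 current: list[GenerationRecord]) -> float:
    """Drift proxy: relative change in mean output length between two windows."""
    mean = lambda rs: sum(len(r.output) for r in rs) / max(len(rs), 1)
    base = mean(baseline)
    return abs(mean(current) - base) / max(base, 1.0)


if __name__ == "__main__":
    week1 = [GenerationRecord("q", "a short answer", "group_a", False),
             GenerationRecord("q", "another answer", "group_b", True)]
    week2 = [GenerationRecord("q", "a much longer answer than before", "group_a", False)]
    print(refusal_rate_by_group(week1 + week2))  # flag gaps across groups
    print(length_drift(week1, week2))            # flag distribution shift
```

In practice, metrics like these would feed the reports and dashboards the announcement describes, with thresholds tuned to each enterprise use case.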
Dive Insight:
As government officials strain to head off AI abuses through executive orders and regulatory action, and researchers weigh the risks of broader societal impacts, technology vendors are looking to address more immediate, practical concerns.
IBM is leaning on a multimodel approach to complement its entrenched hybrid cloud strategy and help organizations overcome adoption hurdles.
“There are going to be lots of different tools and models that are going to provide different types of value for different enterprise use cases, different situations across your company,” Otto said.
The governance capabilities work across IBM’s Granite models, models available through the Azure OpenAI and Amazon Bedrock suites, and the company’s broader partnership ecosystem. The toolkit is part of a larger suite of capabilities built into the watsonx platform, which, in addition to multiple LLMs, includes AI coding assistants and an enterprise model-building studio.
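As a rough illustration of the “governance wrapper” idea Otto describes, the hypothetical sketch below puts a single validation and audit-logging layer in front of interchangeable model backends. ModelBackend, GovernedModel and EchoBackend are invented names for this sketch and do not reflect IBM’s, Microsoft’s or Amazon’s actual SDKs.

```python
# Hypothetical sketch of a governance wrapper: one validation and audit layer
# in front of interchangeable model backends. Names are illustrative only.
from datetime import datetime, timezone
from typing import Protocol


class ModelBackend(Protocol):
    name: str
    def generate(self, prompt: str) -> str: ...


class GovernedModel:
    """Wraps any backend so every call is validated and recorded the same way."""

    def __init__(self, backend: ModelBackend, blocked_terms: set[str]):
        self.backend = backend
        self.blocked_terms = blocked_terms
        self.audit_log: list[dict] = []

    def generate(self, prompt: str) -> str:
        # Input validation: reject prompts containing blocked terms.
        if any(term in prompt.lower() for term in self.blocked_terms):
            raise ValueError("prompt failed input validation")
        output = self.backend.generate(prompt)
        # Audit trail: record which backend produced which output, and when.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "backend": self.backend.name,
            "prompt": prompt,
            "output": output,
        })
        return output


class EchoBackend:
    """Stand-in backend used here so the sketch runs end to end."""
    name = "echo-model"
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"


if __name__ == "__main__":
    governed = GovernedModel(EchoBackend(), blocked_terms={"ssn"})
    print(governed.generate("summarize this quarter's results"))
    print(len(governed.audit_log))  # one recorded call
```

The point of the pattern is that the audit trail and validation rules stay the same whether the backend is a model an organization built itself or one it uses off the shelf.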
“People are realizing the importance of a hybrid approach, of being able to leverage on-prem capabilities, cloud relationships and having a multimodel approach, including models you build and models you use off the shelf,” Otto said.
To enable broad adoption, use cases have to be grounded in models that can be trusted to deliver accurate insights and protect sensitive data. For regulatory compliance and peace of mind, organizations will need visibility into a model’s data sources and its decision processes.
The only way to unlock critical use cases that go beyond the back office is to understand where insights are coming from and to ensure there’s no bias or security risk, Otto said.
Potential standards are only just gaining a foothold.
Outside of existing data-privacy regulation, intellectual-property protections and security protocols, LLM governance is largely a patchwork of best practices learned from previous iterations of the technology and unfortunate mishaps.
“Governance right now is a little bit of the Wild West — everyone's kind of building their own stuff,” Otto said. “But, I think it's going to evolve quickly and there's going to be a push to put standards in place.”