Amid vendor-led pressure to adopt generative AI, CIOs say they aren’t rushing to embed the technology into every inch of the tech stack. First, executives want to check the facts — no matter how hard providers push for speed.
Providers' desire for speedy enterprise adoption is palpable. Generative AI has been embedded in popular CRMs, ERPs, software development tools and IT support solutions.
“I can just feel from the vendor community right now enormous amounts of pressure,” Jason Strle, CIO at Discover Financial Services, told CIO Dive. “It’s now showing up not just in how they’re trying to engage with us, but also how generative AI is mentioned.”
While early trends indicate enterprise interest in AI is at an all-time high, CIOs can keep the upper hand in vendor conversations by developing a clear understanding of their organization’s risk appetite, auditing provider claims and cutting through unnecessary hype.
Generative AI dominates industry conferences, update cycles and future plans. In the hunt for enterprise customers, some vendors have taken less-than-savory approaches, such as AI washing: overstating the technology’s capabilities or using the word AI as a marketing gimmick.
These tactics can open vendors up to regulatory and legal scrutiny. They also raise red flags for CIOs interested in modernization.
“A lot of vendors may think claiming that their product is leveraging some sort of AI capability is helpful, but it’s not always,” Strle said. “As soon as we believe that there may be a nondeterministic element to their solution, we then have to go through additional risk management steps based on our own policy.”
The hype can backfire, leading customers to further question vendor claims.
“When they’re throwing around these terms, they’re inadvertently signing up for a lot more scrutiny from us,” Strle said.
How tech leaders are pushing back
Executives must put the buzzing vendor field to the test, identifying the safest applications of the technology and the ones that add the most value.
“A lot of CIOs and organizations generally have gotten so enamored with what they believe AI is capable of doing that they have a false sense of security,” said Chris Novak, senior director of cybersecurity consulting at Verizon and advisory board member of the Cybersecurity and Infrastructure Security Agency.
The misconception lies in thinking AI automatically enhances processes.
“If you add hot sauce to a cake mix, it doesn’t necessarily make it better,” Novak said.
Finding the balance between keeping the business safe and keeping it innovative is a struggle CIOs know all too well. They also know ignoring disruption is not a viable option.
Risk tolerance varies among organizations, but having a clear process for evaluating tools, systems and their potential impacts is crucial.
Copyright indemnities, which offer conditional protection for customers using generative AI tools, became a popular addition to vendor contracts as questions around ownership persisted. Enterprises are split on their effectiveness.
“There are some that are saying, ‘Yeah, the indemnification clauses, they don’t really do anything and we’re accepting the risk that if there’s something going on, we will be able to back out, we will be aware of that, and take it into account when we’re doing our development,’” Andrew Cornwall, senior analyst at Forrester, said.
Others are looking for more assurances.
“All of this is very complex and certainly gives rise to a lot of just concern,” Thomas Humphreys, compliance expert and content manager at Prevalent, told CIO Dive.
Customers have their doubts, too. Three in five are wary of AI, and nearly three-quarters want companies implementing the technology to make it more trustworthy, according to a KPMG survey released in January.
Depending on AI vendors for transparency and protection isn’t enough, analysts say. Enterprises must rely on their own abilities to audit tools, reinforce governance standards and mitigate risks.
The U.S. Army has worked to identify the downsides and opportunities that come with adopting AI as part of a 100-day plan laid out in March.
“In the [Department of Defense], because of national security, we want to verify and trust,” Young Bang, the U.S. Army’s principal deputy assistant secretary for acquisition, logistics and technology, said during an AWS summit last month in Washington, D.C. “We want to adopt third-party algorithms, but we want to understand the risks associated with certain things, and then we will make informed decisions.”
With a growing appetite for emerging capabilities, the Army will deploy a 500-day plan to operationalize the findings, using a layered risk mitigation framework along the way.
“We’re trying to overcome the things that are going to prevent us from adopting third-party generative AI, and we’re doing that now,” Bang said during the conference.
Highly regulated and risk-averse private sector organizations are taking a similar approach.
“We definitely have a very eager appetite for these capabilities, but we’re going to do it in a way where we can feel really, really confident that we’ve understood all the risks that are involved with it,” Strle said.