ChatGPT's public release took the enterprise tech world by storm one year ago, launching a monthslong race to embed the technology and gaining the attention of global regulatory agencies in the process.
The landscape changed dramatically as companies grasped the technology's potential. Most CTOs changed product roadmaps because of generative AI and nearly one-third adjusted their strategy, according to SoftBank data. Two in five technology leaders even shifted talent priorities to support generative AI needs.
As CIOs began experimenting and implementing the technology, questions emerged related to costs, copyright issues and data protection. Transparency surrounding AI systems also took over public discourse.
But enterprises pushed forward, determined to capture the benefits of generative AI. More than half of executives say their company has implemented generative AI to some extent, according to a PwC survey.
Even as generative AI and the large language models that fuel it become a more common addition to enterprise tech stacks, CIOs still face adoption obstacles that require caution in the new year. As leaders champion strategy and implementation plans, they’ll also need to know when generative AI is not the best tool for the job.
The most popular foundational models that enterprises are integrating into workflows require large data sets that most businesses are not in a position to build, according to Erik Erlandson, senior principal software engineer, emerging technologies at Red Hat.
The top 10 major foundational model developers have a long way to go in terms of transparency, according to an October report from researchers at Stanford University, Massachusetts Institute of Technology and Princeton University. The researchers gathered publicly available information and scored the models based on the transparency of resources used to build the foundational model, details about the model itself and the downstream use.
Meta’s Llama 2 received the highest score at 54 out of 100, according to the researchers. OpenAI’s GPT-4, which powers the company’s enterprise version of ChatGPT, scored 48, Anthropic’s Claude 2 scored 36 and Amazon’s Titan Text came in last at 12, according to the report. When the study was conducted, Titan was in private preview and was not generally available, Amazon said.
While technology leaders are familiar with working with vendors, they aren’t as familiar with what questions they should ask about model details and limitations, Erlandson said.
“The tricky bit is that, while we have 20 years or more of well-developed legal knowledge and licensing around how to define open source and what it all means, we have much less of this in terms of these new foundation models,” Erlandson said.
Room to improve
Generative AI showed that CIOs and enterprises across industries are willing to forge ahead to capture benefits even with outsized risks looming.
The majority of businesses said they weren’t working to mitigate risks, according to a QuantumBlack, AI by McKinsey survey in August. Those that are, however, are using acceptable use policies and training opportunities to equip workers with the right guidance.
Principal Financial Group established a team of compliance and cybersecurity experts to better understand the rapidly evolving landscape and ensure proper guardrails are in place, according to EVP and CIO Kathy Kay.
“Our responsible AI steering and working groups have been steadily refining our policy on AI, while continuing to support responsible exploration and testing of ideas,” Kay said in an email. The team works closely with key vendors to develop capabilities in controlled settings that support the company’s cybersecurity practices.
Vendors have had to adapt to a more risk-aware flock of enterprise customers, too. OpenAI added security guardrails and privacy options as well as enterprise versions of its chatbot ChatGPT. As attention turned to copyright law in relation to training data and generated outputs, Microsoft and Google said they’d step in for customers if they faced legal risks due to using the tech giants’ products and services.
Nearly half of business leaders plan on investing more in responsible AI in 2024 than they did in 2023, according to AWS research conducted by Morning Consult. The report defined responsible AI as the design, development and use of the tech in a way that’s fair, accurate, secure, safe, transparent and inclusive.
Businesses still have room to improve in terms of embedding safe design principles into their AI.
“If I look back at the year 2023 … organizations have thought about responsible AI as a nice-to-have, and not necessarily something that they must do,” Diya Wynn, senior practice manager of responsible AI at AWS, said.
The three biggest hurdles business leaders said they face in adopting responsible AI are the fast rate at which the technology is evolving, not enough awareness or education and a lack of regulation, according to the report.
The expected shift to view responsible practices as a business imperative derives from regulatory and internal pressure, Wynn said. Nearly half of business leaders say their board members pushed for the development of plans and strategies that promote responsible AI, or expect them to in the next year, according to the AWS report.
With regulatory pressure expected to persist, enterprises that defer safe and responsible development and use of AI systems risk eroded customer trust, reputational damage, revenue loss, regulatory backlash and criminal consequences.
More than one-third of business leaders believe they would need to spend more than $1 million on damage control related to irresponsible design, development and use of AI, according to the AWS survey.
Reining in the hype
The past year was defined by the hype surrounding generative AI. Use cases began cropping up across industries and individual enterprises, from healthcare to retail and software development to HR.
While AI has always been broadly applicable, its newfound instant accessibility leaves companies struggling with where to start, said Vincent Yates, chief data scientist at Credera and founder of the AI Global Council.
Even as early adopters make headlines, the majority of businesses are quite early on in their journey to implement generative AI.
“I’m not sure most organizations have any real strategy,” Yates said. “In order to build a strategy, part of what they have to do is recognize what are the options we could do and sit down and really spend some time thinking about that.”
Generative AI isn't always the best solution to enterprise problems.
Amanda Stent, professor and director of Colby College’s Davis Institute for AI, uses a small set of rules when talking with people around the country about generative AI and in the institute’s research.
“First is, is this a thing that needs to be automated,” Stent said. “If it needs to be automated, does it need to be automated using AI? Because quite often, you can automate it using something that’s much more inspectable like regular code, just a straight-up algorithm.”
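Stent’s point can be sketched with a hypothetical example: a task like pulling invoice numbers out of an email doesn’t need a language model when a few lines of inspectable code do the job deterministically. The invoice-number format and sample text below are illustrative assumptions, not drawn from the article.

```python
import re

# Hypothetical task: extract invoice numbers like "INV-2024-0042" from free text.
# A plain, auditable regular expression handles this deterministically --
# no generative model, no inference cost, no unpredictable output.
INVOICE_PATTERN = re.compile(r"\bINV-\d{4}-\d{4}\b")

def extract_invoice_numbers(text: str) -> list[str]:
    """Return every invoice number found in the text, in order of appearance."""
    return INVOICE_PATTERN.findall(text)

email = "Payment is due for INV-2024-0042 and INV-2024-0107 by Friday."
print(extract_invoice_numbers(email))  # -> ['INV-2024-0042', 'INV-2024-0107']
```

Unlike a model’s output, the pattern above can be read, tested and reasoned about line by line, which is the inspectability Stent describes.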
Erlandson shared a similar sentiment.
“We’ve done some experiments where we use generative AI and then tried some stuff that wasn’t generative AI,” Erlandson said, referencing a project at Red Hat. “The interesting thing was, at least in this particular case, the manual non-learning model algorithm outperformed every generative model we played with.”
Enterprise technology leaders know it makes little sense to embed a costly technology just for the sake of it, but the pressure to adopt generative AI has blurred that judgment for some.
Even vendors are expected to rein in some of their AI additives, according to Matthew Miller, principal analyst, AI, automation and analytics at G2.
“Some of these generative features within certain products and across certain categories probably will fizzle, will go away, because it's not making a big impact, it’s costing the businesses and vendors money,” Miller said. “There will be a counter-movement as people realize it's not always necessary in every use case.”
Correction: This story has been updated to indicate Amazon Titan Text was in private preview when researchers at Stanford University, Massachusetts Institute of Technology and Princeton University conducted their study.