By: Jen A. Miller
Published June 24, 2024
CIOs are carefully monitoring the potential advantages of deploying generative AI copilots within their technology stacks. Providers promise a smooth plug-and-play experience followed by quick productivity wins.
But the new technology is not guaranteed to fit in seamlessly. CIOs must rely on targeted tests to learn how tools can perform — and where potential issues lie.
“There’s been so much hype over the last 12 to 18 months around AI,” said Kevin Miller, chief technology officer of IFS, a software solution provider. “It’s been pitched as this magic box where you just open it and it fixes everything. Realistically, you have to have some goals for it and some examples or use cases.”
A 30-day test rollout of an AI copilot can equip CIOs with critical information to support wider adoption plans.
To give a copilot the best chance of working, CIOs should do some basic data hygiene before the rollout, said J.P. Gownder, VP and principal analyst at Forrester. His primary experience is with Copilot for Microsoft 365, which he says does a good job of respecting document permissions, “so if I don’t have permission to hear the answer, Copilot should not tell me the answer.”
However, if a company doesn’t have good data hygiene, documents aren’t permissioned correctly in advance, which means the AI could be getting into — and giving answers about — material not every employee should be able to access.
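Where the content platform exposes sharing metadata, a short audit script can surface over-permissioned documents before a copilot starts indexing them. The minimal sketch below assumes a hypothetical inventory of document paths and sharing groups; a real audit would pull this metadata from the platform’s admin tooling.

```python
# Minimal sketch: flag documents with overly broad sharing before a copilot
# indexes them. The Document shape and group names are hypothetical stand-ins
# for whatever the content platform's admin APIs actually return.
from dataclasses import dataclass

# Groups that usually indicate over-permissioned content.
BROAD_GROUPS = {"Everyone", "All Employees", "Anonymous Link"}

@dataclass
class Document:
    path: str
    shared_with: set[str]  # groups or users with read access

def find_overshared(documents: list[Document]) -> list[Document]:
    """Return documents readable by any broad group."""
    return [doc for doc in documents if doc.shared_with & BROAD_GROUPS]

if __name__ == "__main__":
    docs = [
        Document("finance/salaries.xlsx", {"HR", "Everyone"}),
        Document("it/runbook.docx", {"IT Ops"}),
    ]
    for doc in find_overshared(docs):
        print(f"Review sharing before rollout: {doc.path}")
```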
CIOs should also be thinking about which workers are best suited to try out an AI copilot. Employees should know how to write effective prompts and be aware that copilots, which are largely based on large language models, can hallucinate and generate nonsense. “If you are taking the word of AI as sacrosanct, you are not going to be using it effectively,” Gownder said.
CIOs should not simply take someone’s word that they know how to use AI efficiently, he added, since “most companies are underinvesting in training and skill development.” It’s not uncommon for businesses to deploy a copilot with one hour of training per user, when best practices call for 10 to 15 hours, according to Gownder.
Getting feedback from early users
As with any other IT project, data can help CIOs tailor their approach as employees leverage copilots during the first 30 days.
Metrics help show how employees are using the tool, said Varun Singh, president and co-founder of Moveworks, maker of an enterprise copilot platform. Specific metrics will vary by company, depending on what it hopes the copilot can help achieve. A company may want to check whether the copilot is reducing the number of support tickets filed, for example.
CIOs and technology leaders can also ask users if the AI assistant’s response was helpful, but Singh noted that people are more likely to rate something as not helpful than helpful, much like someone is more likely to go online to complain about a negative experience than to write a positive review. He also suggests tracking whether a user went on to file a ticket with a live agent. “If they’re doing that, it means the copilot is not effective.”
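A minimal sketch of the two signals described above: week-over-week ticket volume against a pre-rollout baseline, and the share of copilot sessions that still escalate to a live agent. The session log fields are hypothetical stand-ins for whatever the copilot platform actually exports.

```python
# Minimal sketch of pilot metrics: ticket deflection and escalation rate.
# Field names like "filed_ticket" are illustrative, not a real export schema.

def ticket_reduction(baseline_tickets: int, current_tickets: int) -> float:
    """Percentage drop in support tickets versus the pre-rollout baseline."""
    return (baseline_tickets - current_tickets) / baseline_tickets * 100

def escalation_rate(sessions: list[dict]) -> float:
    """Share of copilot sessions where the user still filed a ticket."""
    escalated = sum(1 for s in sessions if s.get("filed_ticket"))
    return escalated / len(sessions) * 100

if __name__ == "__main__":
    print(f"Ticket volume down {ticket_reduction(480, 390):.1f}%")
    sessions = [{"filed_ticket": True}, {"filed_ticket": False}, {"filed_ticket": False}]
    print(f"Escalation rate: {escalation_rate(sessions):.1f}%")
```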
In addition to training workers before launching an AI copilot, companies should offer continuous training opportunities, and support, throughout the first 30 days — and beyond. There can be synchronous training sessions, but also “virtual training on demand,” Gownder added.
Companies can also set up weekly office hours, or weekly sessions where “people can bring their successes and failures,” he said. Dedicated Teams or Slack channels for discussion can also enable peer-to-peer learning. Employee testimonial videos are helpful too: workers can see “someone at their organization who is using copilot effectively,” and they are more likely to trust a video of a colleague at their own company than a vendor representative.
A copilot can also be used to gather feedback, whether from chat logs or other interactions, and to summarize next action steps based directly on that information, Gownder said. That feedback “shouldn’t just go into this broad data lake that no one’s going to look after.”
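One lightweight way to keep that feedback out of an untended data lake is to group thumbs-down comments into recurring themes that map to action items. The keyword buckets below are purely illustrative assumptions; a production version might have the copilot itself do the summarizing, as Gownder suggests.

```python
# Minimal sketch: bucket raw negative feedback from chat logs into themes
# so each theme becomes a reviewable action item. Buckets are illustrative.
from collections import defaultdict

THEMES = {
    "permissions": ["access denied", "permission", "can't open"],
    "wrong answer": ["wrong", "incorrect", "outdated"],
    "missing context": ["didn't know", "no answer", "couldn't find"],
}

def group_feedback(comments: list[str]) -> dict[str, list[str]]:
    grouped = defaultdict(list)
    for comment in comments:
        lowered = comment.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                grouped[theme].append(comment)
                break
    return dict(grouped)

if __name__ == "__main__":
    logs = ["It gave an outdated policy", "Couldn't find the travel form"]
    for theme, items in group_feedback(logs).items():
        print(f"Action item ({theme}): review {len(items)} comment(s)")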
As part of their training, testers should also be encouraged to play around with the copilot, even try to “break it somewhat,” Gownder said. “Ask it questions where it might not have the answer, or don’t provide context it’s looking at.” That can help determine if a copilot is hitting the mark, or not accessing the right information for an answer.
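That kind of structured “break it” exercise can be as simple as a shared checklist of prompts that probe gaps rather than happy paths. In the sketch below, ask_copilot is a hypothetical placeholder for however the deployed copilot is actually queried; the prompts and the canned response are assumptions for illustration.

```python
# Minimal sketch of a "try to break it" checklist for pilot testers.
# ask_copilot is a hypothetical stand-in for the real copilot query call.

EDGE_CASE_PROMPTS = [
    "What is our parental leave policy?",   # may not be indexed
    "Summarize the Q3 board minutes",       # should be permission-gated
    "What's the Wi-Fi password?",           # answer shouldn't exist
    "Fix it",                               # deliberately missing context
]

def ask_copilot(prompt: str) -> str:
    # Placeholder: replace with the deployed copilot's query interface.
    return "I don't have enough information to answer that."

def run_breakage_tests() -> None:
    for prompt in EDGE_CASE_PROMPTS:
        answer = ask_copilot(prompt)
        print(f"PROMPT: {prompt}\nANSWER: {answer}\n")

if __name__ == "__main__":
    run_breakage_tests()
```

Recording which prompts produce confident but wrong answers, versus honest refusals, gives the team a concrete list of content and permission gaps to fix before wider rollout.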
Taking action on user feedback
While companies should certainly conduct surveys, they should also make sure to talk to testers about their experiences, Singh said.
“Surveys often have pockets of data where we might be missing something important that doesn’t reveal itself in the big picture … with other forms of ratings,” he said. Talking to people can help CIOs and leadership teams “see how they’re feeling about the copilot and see if it’s helping or just making more noise.” That’s also a way to tell “if they’re using it right or are they making bread recipes with it,” he joked.
This is often best done with intentional conversations outside of the IT setting.
“Technologists are always on the bleeding edge, and they can feel [if] a technology is not good enough or smart enough,” he said. A regular user may appreciate small changes that don’t seem cool enough for a technologist but that end up helping a company improve and expand its copilot use.