In the coming years, enterprise leaders can expect to see AI applications sprint beyond back-office tasks and focus more sharply on where their brands intersect with customers.
"What we've seen historically is it's been all the behind the scenes," said Jordan Fisher, co-founder and CEO of Standard Cognition, speaking on a Forbes CIO Next Virtual Series Wednesday. "Things are locked behind a computer screen."
But AI is growing closer to customer and employee touchpoints, Fisher said. The shift can deliver improvements in both the workplace and the customer experience, though experts caution against over-hyping the results AI can yield for businesses. A lack of access to data and skills frequently stands between tech leaders and the results they aspire to.
"For me, it's all about how do we meet the person where they are and give them the best possible experience, whether they're doing a job or whether they're shopping," said Fisher, whose company makes an AI-powered computer vision platform for retail checkout.
In the customer lifecycle, technology that can provide insights about customer engagement with a product, service or brand has "a lot of applicability, because you can capture that in-the-moment experience," said Rana el Kaliouby, co-founder and CEO of Affectiva, speaking on the panel Wednesday.
"You can use these insights offline to optimize experience or, even better, you can provide just-in-time support, which is where a lot of the applications in the industry are moving toward," said el Kaliouby.
But efficient AI products can't operate without data that guides decisions, such as a consumer's most frequent purchase or their preferred store location. Challenges in the space involve data privacy and the safekeeping of personally identifiable information. Concerns also touch on how algorithms make decisions about the consumers they interact with, and where their assumptions come from.
Enterprise tech players, including Google, Amazon and IBM, have expressed their desire to have a clear government framework around regulating AI.
The ethical implications of data collection made el Kaliouby decide her company would not participate in projects in the surveillance space.
"Often, consumers are not really aware that they're being monitored, and there's no transparency around how their data is being used," said el Kaliouby. "I'm a huge advocate for thoughtful regulation. I do think we need to work very closely with government and legislators and civil liberties."
Data analytics systems can also replicate the biases of their creators, amplifying skews that seep into algorithms during development and can eventually harm the people they affect. Thorough validation frameworks can help technologists stave off the risk of building biased AI products.
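One common form such a validation framework takes is disaggregated evaluation: reporting a model's performance separately for each group of people it affects, rather than as a single aggregate score. The short Python sketch below illustrates the idea; the sample data, column names and tolerance threshold are illustrative assumptions, not any particular company's framework.

```python
# Hypothetical sketch of a disaggregated validation check.
# The data, column names, and 0.1 threshold are illustrative assumptions.
import pandas as pd

# Illustrative evaluation set: true labels, model predictions, and a
# demographic attribute the team wants to audit across.
eval_df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 1, 1, 1, 0],
    "group":  ["a", "a", "a", "a", "b", "b", "b", "b"],
})

# Compute accuracy separately for each group instead of one aggregate number.
per_group = (
    eval_df.assign(correct=eval_df["y_true"] == eval_df["y_pred"])
           .groupby("group")["correct"]
           .mean()
)

# Flag the model if the gap between the best- and worst-served groups
# exceeds the team's chosen tolerance (0.1 here is an arbitrary example).
gap = per_group.max() - per_group.min()
print(per_group)
print(f"accuracy gap across groups: {gap:.2f}")
if gap > 0.1:
    print("warning: performance is uneven across groups; review labels and data.")
```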
Enterprise software promises to enable the augmented worker by automating tasks formerly performed by humans. Robotic process automation in particular delivered efficiency gains for companies weathering the impacts of the pandemic. But in that replication process, biases can slip by, said Fisher.
"Even if you think you're doing the right thing, and just listening to what your labelers are saying in order to train your algorithms, that's actually the easiest way to get bias," he said.
These risks became clearer as automation laggards quickly embraced the technology in the face of increased demand or a reduced workforce. Once processes become automated, they are unlikely to revert to their previous forms.