Editor’s note: This story is part of the CIO Dive Outlook on 2021, a series on the trends that will shape the industry in 2021. For a look at the business trends affecting other industries, see the Dive Outlook on 2021.
For years, the narrative around AI followed a heroic theme. The technology under development was set to help humanity advance on intractable problems.
Thus far, AI has shown it can bring efficiency to essential processes, but the technology faces hurdles in solving more complex issues on its own, especially those that require a nuanced understanding, such as political views.
"Not every problem is best solved through the use of artificial intelligence," said Eric Cornelius, chief product architect at BlackBerry, speaking Tuesday at CES 2021. "But it seems that a growing number of problems are at least able to be solved by AI."
In selecting the tasks AI can tackle within an organization, leaders should target the repetitive tasks that exploit AI's ability to scale, parsing through volumes of data no human could. More complex problems, such as identifying fake news, remain outside AI's purview, at least for now.
Ethical challenges also stand in the way of an AI-for-everything paradigm. Data scientists are perfectly capable of producing algorithms that can autonomously make every decision in a given process, but there's genuine concern over the impact biased systems can have on the subjects of their decisions.
One application where AI has shown a clear advantage is cyberthreat detection. AI models can assess and identify malware at speeds no human could match.
"To detect a piece of malware, you're looking at a training set that's got billions of files, each file having over a million individual features," said Cornelius. "You just start to wrap your mind around those numbers, and no human could ever fathom make a detection."
Though AI cybersecurity tools can strengthen an organization's defenses, cybercriminals have learned to turn AI's weaknesses against it, which reinforces the importance of data reliability.
For organizations, choosing where AI applications enter the picture should be guided by the impact algorithms can have on customers, and by how much routine work they clear from employees' desks, freeing workers for higher-level tasks.
AI models are helpful in reducing the amount of "low level, repetitive labor that humans have to do," said Kevin Guo, CEO of Hive.
At times operating behind the scenes, AI products can produce "massive gains in efficiency and productivity," said Guo. "Maybe consumers don't see it directly, but it is affecting their day-to-day lives."
Organizations have seen value in the augmentative capabilities of AI, a trend further advanced during the pandemic. Algorithms sped up routine administrative tasks and bolstered customer service just as human capacity was overwhelmed.
"AI is never going to build a bridge," said Cornelius. "AI may build really great plans, it may give us the best blueprints for a bridge and do all the soil analytics. It might even file the paperwork for us. It's not gonna build the bridge."
That's not to say AI can't play a role in complex problems. IBM's AI capabilities are playing a role in tracking the massive volume of plastic debris clogging up the oceans.
"We're using AI to help track the marine plastic, and develop more accurate and effective processes to essentially eradicate that," said Bridget Karlin, global managing director, CTO and VP at IBM.
"I think with things like fake news, it's actually a pretty hard problem," said Guo. "It's something we haven't done well." AI for content moderation has shown to fare better in identifying things like nudity, drugs or violence, that are less subjective.
There are steps companies can take to ensure their algorithms remain as unbiased as possible. It requires spotting how and where bias influences an algorithm's decisions.
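One simple way to spot that influence is to compare a model's outcomes across the groups of people it affects. The sketch below is a minimal, hypothetical illustration of such an audit; the column names, data and demographic-parity check are assumptions for illustration, not a description of IBM's or any vendor's tooling.

```python
# Minimal bias-audit sketch: compare positive-outcome rates across groups,
# a simple demographic-parity check on hypothetical decision data.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

# A large gap between groups flags a decision point worth auditing before
# the model is trusted to act autonomously.
gap = rates.max() - rates.min()
print(f"approval-rate gap: {gap:.2f}")
```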
"We do believe AI can be engineered to be fair, but it is a choice," Karlin said. "It needs to be properly calibrated and programmed accordingly.