Editor’s note: This is a guest post from Sanjay Rajagopalan, chief design and strategy officer at Vianai Systems.
Technology’s dizzying progression often leads to the worry that machines will one day replace humans in several kinds of tasks — including many forms of knowledge work.
This worry is most likely unwarranted, and the reasons may not be what you expect. To understand why AI and machine learning technologies won’t replace humans anytime soon, we need to spend more time understanding how humans think about, value and respond to technology.
Essentially, we need to look at technology through a human lens.
Humans hold machines to different standards
Humans place some intrinsic trust in other humans. We have confidence in certain human capabilities that we don’t typically afford to machines at the outset. We allow other humans more room for error, for example, than we allow AI systems.
We accept fallibility in other humans as normal and trust that they will learn from their mistakes. While we tend to forgive minor errors and believe in teachable moments, redemption and second chances for each other, we allow no such leeway to machines.
Consider the idea of getting into a car with a human driver, perhaps a friend or a family member, who recently had a minor accident. For most of us, this would pose little problem, because we expect humans to learn from their mistakes and be more cautious going forward.
However, would our reaction be the same if the driver were an AI system? Without assurance that someone has definitively addressed the problem that caused the accident, say, by sending the vehicle to the shop for repair and “re-programming,” most people would not trust the machine enough to get back into the car.
For many tasks, we have no tolerance for even minor mistakes from machines and AI systems, because we assume the errors stem from something fundamentally wrong.
We may become tolerant of machine errors over time, such as when we learn to ignore a bug in a program, but we never expect the program to fix itself spontaneously. We expect some degree of perfection from machines, and since this is an impossible target, we will likely never fully bridge the trust gap.
The solution is to keep a human in the loop. Humans are most comfortable with machines as assistants to themselves and other humans, which is likely to continue with AI. For the vast majority of tasks, humans will need to continue working with AI systems, serving as the interpreters, intermediaries, proctors or overseers.
Humans value the journey as much as (or perhaps more than) the destination
Generative machines may produce impressive outputs, but the process by which they produce them holds little interest for most people once the novelty wears off. The outputs themselves are simply static artifacts, frozen in time.
Humans are likely to find such artifacts inherently uninteresting once the hype of the technology’s early days fades. With time, these “pieces of art” may become low-value clutter, like photographs on a stranger’s lost iPhone.
To us, valuable art and artifacts are symbolic, meaningful and evocative, not just collections of molecules arranged perfectly. Take premium wines, for example. While a premium wine may be indistinguishable from a cheaper variety in a double-blind tasting, consumers are still drawn to the story behind the product.
We can extend this idea to generative AI by comparing paintings made by a machine to those made by a human painter. A human painter has a life story, a style that evolved over decades of training, and struggles they have faced along the way. Humans will always desire the humanity that, to us, makes art and other types of content beautiful.
Humans value opinions, but we don’t allow machines to hold them
Many people enjoy the company of other humans because they cherish and value individuality, unpredictability and serendipity – which other humans supply in spades.
For example, we may turn to our parents, elders or wise friends when we want opinions to help us make a decision. Perhaps we expect some of those conversations to be contentious, or we want someone to play the devil’s advocate and take the contrarian position.
As it stands today, however, we do not permit machines to have opinions on complex or controversial matters. How happy would you be with a human companion or associate who has no opinions or will not share them with you?
Engineers behind generative AI tools such as ChatGPT go to immense lengths to avoid controversial subjects and build strict guardrails against bias, hate speech and profanity. This is proper and necessary, but the resulting entity is then no longer an adequate replacement for a human.
As an example, if you ask ChatGPT to predict who will win the Super Bowl, it will offer an opinion by looking at the stats and summarizing what the critics say. However, if you ask who will win the presidential election, the system will steer clear of the topic, pointing out that it is merely a machine incapable of having opinions.
AI tools avoid political issues because of the potential for controversy and negative publicity for the makers of the technology. The creators of AI systems are aware of these de facto restrictions and must weigh what is controversial and subjective when building their systems.
AI working to enhance human capability
We need not concern ourselves with the notion that AI will replace humans, not because machines cannot imitate humans effectively, but because humans will have different expectations of machines than of other humans, at least in the near term.
Until human expectations change or individual machines live long, unique and complex lives, just as humans do, human-generated artifacts will remain more valuable and poignant to other humans than machine-generated ones.
We need a shift in perspective, from viewing machines as our replacements to viewing them as partners and assistants, not because machines can’t do certain “intelligent” tasks independently, but because human expectations and constraints on machines will limit their adequacy.