Companies are scrambling to inject advanced technology into business, vying for an edge over the competition or leaping ahead in development to create a must-have customer tool.
A recent Accenture survey found 94% of business and IT executives say their organizations have "accelerated" or "significantly accelerated" the innovation pace.
Just 6% of the 6,672 surveyed say the innovation pace has remained the same.
Eagerness for cutting-edge technologies and innovation can come with backlash. A glaring example is the development of artificial intelligence and the fear that advanced computer technologies will deliver biased insights because of tainted — or incomplete — data sets.
Just because a company can develop an advanced AI doesn't mean it should. Enter ethics and the idea of ethical debt.
"Ethical debt is if you don't build artificial intelligence ethically from the ground up, it will be at best a huge reputational risk," said Rumman Chowdhury, global lead for responsible AI at Accenture Applied Intelligence, speaking Tuesday at Washington event. "If something goes wrong you will have to undergo significant overhauls of your system and at worse you will actually have irreparable harm to your company, your reputation and legal liability."
Taking the time to consider ethics is not immediately rewarding. If a company has done its due diligence, the best thing that can happen with ethical AI development is nothing goes wrong.
Just like cybersecurity, "nothing happens is the goal," said Chowdhury.
Police patrols vs. fire alarms
When an organization is weaving AI into a business or product, the world of responsible development can be divided into three pillars, Chowdhury said:
- The technical component of understanding the ethical implications.
- Understanding the potential reputational risk and the imperative for making strong product design decisions.
- Governance for establishing the roots of accountability, trainability of algorithms, standards and metrics.
Some businesses are more cautious, concerned about compliance risks with AI. Those highly regulated sectors, like banking and healthcare, work to include AI from the outset, Chowdhury said.
Retail and other businesses that have direct consumer impact are also relatively cautious.
Other sectors, however, operate with less development patience.
"To be perfectly frank, Silicon Valley is probably one of the most behind places in understanding ethics in AI," Chowdhury said, "because these are not people who have that sort of humanities-based background. They actually often, many people do think this is a pure technology [problem]. People argue that technology is neutral, that data is neutral."
"To be fair to them, that is not their training," she said.
That's not to say all tech companies operate without foresight. Leading companies are calling for responsible AI development and ethics, working with industry and government to promote development.
Intel has chimed in with recommendations for setting the U.S. national strategy on AI, championing responsible data sharing, fostering innovation, protecting people's welfare and creating a policy environment that supports AI development.
As industry and government work to adopt AI standards and consider ethics, there are two options: establish police patrols or set fire alarms.
Police patrols put a group in charge of governance, investigating every project to find bad guys, Chowdhury said. But the approach is unsustainable: if AI is going to be pervasive, who will be left policing it forever?
Instead, industry can set "fire alarms" to establish acceptable social norms and agree on standards, she said. An alarm raises the accountability flag, and businesses and consumers trust that a fire department — or governing body — will come and put the fire out.