Dive Brief:
- Just half of enterprises have a responsible AI program, and the majority of those programs are limited in scope, according to research from the MIT Sloan Management Review and Boston Consulting Group. The researchers surveyed 1,000 executives at companies with annual revenues of $100 million or more.
- Leaders cite deficiencies in talent, training and leadership involvement as key obstacles to making responsible AI a priority.
- Responsible AI programs carry a business benefit, however. Companies the study identifies as leaders translate the initiatives into business benefits at a higher rate than businesses it calls "non-leaders." More executives at leading companies credit responsible AI with improved products, innovation and brand differentiation than their non-leading counterparts.
Dive Insight:
Responsible AI is an umbrella term encompassing fairness and equity, social and environmental impact, and privacy and safety.
A more precise definition depicts responsible AI as systems that are consistent with a company's organizational values, while still delivering transformative business impact, according to Steven Mills, chief AI ethics officer, managing director and partner at Boston Consulting Group.
"The end there is really important," said Mills, co-author of the report. "I can build it responsibly, considering my values and deliver business impact. Some people think you can do one or the other and I'd argue that's a false choice. You can absolutely do both and you should do both."
Consider the shift in enterprise perception of AI in recent years. The technology has edged into day-to-day operations and shed its reputation as a fringe concept.
With maturity comes — or should come — responsibility.
Among the companies surveyed for the report, the consensus is that a mature responsible AI strategy leads to better business outcomes.
"Having in place a responsible AI program reduces the risk of scaling AI," said Mills."Companies that had a leading responsible AI program with mature AI had 30% fewer system failures than those that just had mature AI alone."
Scaled AI implementations have more systems, and therefore more opportunities for something to go wrong, he said. But a responsible AI approach can increase the chances that failures are caught and addressed earlier.
Companies looking to implement a responsible AI playbook can start by looking at the alignment of technology and values.
"It's more of a mindset shift," said Mills. "But when you do that, it really starts to help: You integrate it into your culture and think about why we're doing this more than just trying to buy down risk."