Dive Brief:
- While companies are making progress deploying AI, more than half of enterprises overestimate their levels of maturity in deploying responsible AI models, according to a report from BCG GAMMA, a research group within Boston Consulting Group. The responsible AI framework evaluates the technology's potential effects on safety, privacy and society at large.
- Almost half of companies fall into the bottom two maturity categories, with responsible AI programs that are either lagging or still in development, according to the report, which collected and analyzed data from senior executives at 1,034 companies with 2,000 employees or more. While 26% of companies say they've achieved AI deployment at scale, just 12% have fully implemented a responsible AI program.
- Businesses most often struggle with three areas of responsible AI maturity: fairness and equity, social and environmental impact mitigation, and safeguarding human well-being while preserving human authority.
Dive Insight:
AI technology underpins critical company systems. It's becoming the backbone of services across the enterprise, from consumer-facing applications to tools that help coordinate manufacturing or logistics. But leaders have long struggled with the ethical dimensions of the technology, including privacy and bias.
Businesses are working to deploy AI at scale. For many, the ethical dimensions of AI remain out of reach. Seventy-nine percent of organizations are exploring or piloting AI projects, but just 21% said their AI initiatives were in production, according to Gartner.
AI is expected to expand in the enterprise. Over the next three years, three-quarters of enterprises will operationalize AI, bringing the technology into production at scale, Gartner projected last year. The expansion will drive a five-fold increase in streaming data and analytics infrastructure adoption.
Companies find that security or privacy concerns represent an obstacle to AI implementation, according to Gartner data. Integrating AI solutions with existing architecture and managing data volume and complexity pose additional challenges.
But organizations are moving beyond risk to focus on the business upside of responsible AI deployment, Sylvain Duranton, BCG GAMMA's global leader and a co-author of the report, said in an emailed statement.
Benefits of responsible AI include brand differentiation and an upper hand in employee recruiting and retention, as well as a culture of responsible innovation, according to Duranton.
Customer expectations, risk mitigation, regulatory compliance and social responsibility also drive business leaders to seek a responsible AI deployment, according to the survey.
To address the potential ethical implications of AI, business leaders need to focus on the traceability and explainability of AI models, according to Daniela Braga, founder and CEO of DefinedCrowd.
"Before, you would code a piece of software and the result would be predictable," said Braga. "But with data, you train a model, a model learns and it's not predictable anymore." That difference has to do with the type, structure, representation and outliers of data.