Dive Brief:
- Companies building artificial intelligence must work to ensure transparency, evaluate bias and assess the values the technology represents, said IBM CEO Ginni Rometty, speaking Tuesday at the World Economic Forum in Davos, Switzerland.
- As part of IBM's approach to AI regulation, Rometty announced the launch of IBM's Policy Lab, a think tank within the company aimed at formulating policy initiatives around innovation. From the group's work, IBM put forth a "precision regulation framework" for artificial intelligence.
- The news comes two days after Google CEO Sundar Pichai called for AI regulation in an op-ed published Sunday. Microsoft President Brad Smith, also speaking at Davos, said the world "should not wait for the technology to mature" before regulating AI. The two leaders' views differ on one key point: the EU's proposed five-year ban on facial recognition. Pichai suggested he might be open to a moratorium, while Smith opposes one.
Dive Insight:
As society grapples with the impact of AI, tech giants have a vested interest in shaping what a regulatory framework will look like.
Google's interest in AI is backed by dollars: it leads other tech vendors, including Microsoft and Amazon, in AI-related acquisitions. Internally, too, it has devoted resources to expanding the AI capabilities of Google Cloud, a key part of its strategy as it seeks to grow its enterprise market share.
Microsoft, in turn, is likely to make AI a key component of its "tech intensity" strategy. The company sees itself as the provider of the technology set to reshape industries and their business outcomes.
As governments ready their regulatory initiatives, the ideal balance will protect the public from AI's potential harms without becoming a bottleneck for innovation. Such a balance in turn benefits vendors: if the technology is perceived as a net positive, public acceptance of it will grow.
In its framework, IBM calls on companies to:
- Appoint an AI ethics official: Someone whose work can help mitigate risk and improve public acceptance of and trust in AI.
- Assess AI's potential for harm: Assessments should focus on the intended use case, the end user, how reliant end users would be on the technology, and the level of automation.
- Put AI front and center: In tech, the focus is often on making solutions invisible, or seamless, to the end user. IBM instead calls for disclosing the purpose of an AI system, making its presence and role clear to the consumers and businesses that come into contact with it.
- Help make AI explainable: Already part of regulatory texts such as the General Data Protection Regulation, the concept of explainable AI has IBM's seal of approval. Organizations are called on to maintain audit trails for input and training data.
- Check for bias in AI: Owners should test AI systems to minimize their potential for harm. For example, the technology should adhere to anti-discrimination laws and to statutes addressing safety, privacy, consumer protection and other sensitive contexts (a minimal sketch of one such check follows this list).
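To make that last recommendation concrete, here is a minimal, hypothetical sketch of one common bias check: comparing selection rates across demographic groups against the "four-fifths" rule drawn from US anti-discrimination guidance. IBM's framework does not prescribe a specific test; the names, data and 80% threshold below are illustrative assumptions.

```python
# Hypothetical bias check: flag groups whose selection rate falls below
# 80% of the highest group's rate (the "four-fifths" rule of thumb).
# Not IBM's method; all names and data here are illustrative.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return (rate, passes_four_fifths_rule) for each group,
    measured against the best-performing group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())  # assumes at least one approval overall
    return {g: (r, r / best >= threshold) for g, r in rates.items()}

if __name__ == "__main__":
    # Group A: 80/100 approved; group B: 50/100 approved.
    sample = [("A", True)] * 80 + [("A", False)] * 20 \
           + [("B", True)] * 50 + [("B", False)] * 50
    for group, (rate, ok) in disparate_impact(sample).items():
        print(f"group {group}: rate={rate:.2f} passes={ok}")
```

Running the sample flags group B (0.50 / 0.80 = 0.625, below the 80% threshold). Real-world audits would layer more sophisticated fairness metrics and legal review on top of a simple ratio test like this one.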
IBM's work in laying out key principles of AI could have significant influence over the tech landscape, provided developers see the framework as a safe set of commonly agreed-upon principles. The push to make AI easily explainable has already garnered support, with industry watchers forecasting it will play a key role in 2020 and beyond.