Clarification: This article has been updated to reflect Eric Horvitz's comments regarding Microsoft's stance on AI ethics. The company has also provided a statement.
Dive Brief:
- Microsoft, following the guiding principles of its Aether Committee (AI and Ethics in Engineering and Research), has systematically terminated sales of AI technologies to potential customers whose intended uses would violate its human rights commitments, according to Eric Horvitz, technical fellow and director at Microsoft Research Labs, speaking Monday at the Carnegie Mellon University K&L Gates Conference on Ethics and AI.
- Amid growing manipulation of user data, especially on social media platforms, Horvitz pointed to "adversarial attacks on human attention for someone else's gain." Under Aether's guidelines, such attacks are considered "tyranny" and are therefore unsupported by Microsoft.
- "We have an ethical imperative to apply AI the best we can" to compensate for the shortcomings of human cognition, Horvitz said. He turned to the case of medical accidents, suggesting AI should serve as a "safety net" that catches human errors without being relied on completely. A system should still have "self knowledge so it doesn't know everything" and "knows when to ask people for help."
Dive Insight:
AI is much like an animal raised in captivity: able to survive in a somewhat controlled environment, but with no guarantee the same will hold once it is released into the wild. The technology faces a similar test as it transitions from the lab to less controlled settings. And AI cannot always make up for what humans do not know, nor the reverse.
Advanced technology like AI was always meant to augment human cognition, not to replace it. It is meant to help fill the "blind spots," "biases" and "gaps" that naturally exist in human thinking.
Still, AI sometimes finds itself pitted against its own digital biases. AI systems were originally "famished for data," which led scientists to eventually build capabilities approaching human-level recognition, according to Horvitz.
"We believe it is very important to develop and deploy AI in a responsible, trusted and ethical manner. Microsoft created the Aether committee to identify, study and recommend policies, procedures, and best practices on questions, challenges and opportunities coming to the fore on influences of AI on people and society," Microsoft said in a statement to CIO Dive.
Monitoring the interaction between humans and AI is a task best addressed by the manufacturer itself or taken up by HR. If a bot were ever to act separately from its human interface, ethical, legal and privacy challenges would arise.
It is clear Microsoft wants to distance itself from any potential controversy arising from its AI offerings. The relationship between vendors and their customers is constantly tested by the shortcomings of a service. However, customers can also be at fault when they use a service in an unintended way, blemishing both their own reputation and that of the provider.