Dive Brief:
- Microsoft is partnering with PwC and EY to help mutual customers deploy responsible AI systems, drawing on the two consulting firms' best practices, the company announced Thursday.
- As part of a broader effort around AI guardrails, Microsoft has committed to sharing knowledge about developing and deploying AI responsibly; creating an AI assurance program; and supporting responsible implementation.
- “We know that these commitments are only the start, and we will have to build on them as both the technology and regulatory conditions evolve,” Antony Cook, corporate VP and deputy general counsel at Microsoft, said in the announcement.
Dive Insight:
Microsoft said it will share expertise, provide training curricula, create dedicated resources, promote risk framework implementation, host customer councils and create a dedicated team of AI legal and regulatory experts.
The AI legal team will serve as a resource to support implementation of responsible AI governance systems in the enterprise.
As software buyers increase their focus on AI, the connection between IT and legal teams could become more cemented, as the copious amounts of data needed to support generative AI workloads present risks, according to a report from software marketplace and review website G2.
In North America, more than one-third of enterprise respondents say legal teams are responsible for conducting security and privacy assessments on software purchases, according to the G2 report, which surveyed 1,700 global software buyers. That number rises to more than half for enterprise respondents in Europe, the Middle East and Africa.
Microsoft published a 41-page report on governing AI in May with the goal of offering ideas and suggestions to countries evaluating regulation options. Company executives have started to make public appearances in Washington, D.C., and on cable news outlets discussing AI regulation and the benefits of the technology.
Executives at Microsoft, including CTO Kevin Scott, have also signed a statement equating the risks of AI to pandemics and nuclear war, calling for global attention to mitigating the “risk of extinction from AI.”
Other signatories included executives at Microsoft partner OpenAI, among them its CEO Sam Altman. Altman has spoken with national leaders about AI around the world, including before Congress, at the G7 Summit and at the White House.
Calls for regulatory action have come from the large companies making the technology, as well as advocates and AI groups. The Center for AI and Digital Policy filed a complaint with the Federal Trade Commission in March asking the agency to investigate OpenAI.
Despite the technology's novelty, FTC Chair Lina Khan warned, businesses involved in the growth of AI can still be held accountable under existing laws protecting against fraud.
Microsoft said in the announcement that it would play an “active role in engaging with governments to promote effective and interoperable AI regulation.”