Dive Brief:
- Salesforce updated its AI acceptable use policy to place guardrails around its AI services, according to a document published Wednesday. The update comes amid increased industry scrutiny over generative AI data use.
- Customers are not allowed to use the company's AI products — or any third-party services linked to Salesforce services — for purposes including child abuse, deepfakes, predicting protected categories or automating decisions with legal effects, among other prohibited use cases.
- "These policy updates allow customers to use Salesforce products with confidence, knowing they and their end users are receiving a truly ethical AI experience from product development to deployment," said Paula Goldman, chief ethical and humane use officer at Salesforce, in a blog post Wednesday.
Dive Insight:
The policy update from Salesforce is the latest effort from the provider to address the risk concerns of would-be enterprise technology adopters.
In June, the company rolled out Einstein GPT Trust Layer, a service designed to let customers access enterprise-ready data security and compliance safeguards while leveraging generative AI tools.
"The GPT Trust Layer gives connected LLMs secure, real-time access to data without the need to move all of your data into the LLM itself," said Marc Benioff, Salesforce chairman and co-CEO, speaking in May during the company's Q1 2024 earnings call. "While they're using the LLMs, the data itself is not moving and being stored in the LLM. That is what our customers want."
As enterprise adoption of generative AI advances, more than half of IT leaders cite inaccuracy and cybersecurity as risks associated with the emerging technology, according to a report from QuantumBlack, AI by McKinsey.
By clearly stating usage limitations in its guidelines, Salesforce is demonstrating leadership on responsible AI within the provider ecosystem, according to Juliette Powell and Art Kleiner, professors at New York University.
"On the surface, the new policy may look unenforceable, but it's an addition to Salesforce's Acceptable Use and External Facing Services Policy, which clearly states that violators could lose their Salesforce licenses," the professors said via email. "It will be really interesting to see which companies are targeted first."
The new policy from Salesforce arrives as another major provider has updated its own terms of service in response to criticism over data use.
Zoom updated its terms and conditions to clarify that the provider can access customer content for safety and legal purposes, but will not use customer data to train its own or third-party AI models.