Dive Brief:
- Salesforce-owned Slack updated its AI and ML privacy policy language Friday after customer concerns about data usage rights circulated earlier in the week.
- “We could have done a better job of explaining our approach, especially regarding the differences in how data is used for traditional machine-learning models and in generative AI,” Slack said in a Friday blog post.
- The collaboration platform said it does not develop large language models (LLMs) or other generative models using customer data, nor does it scan message content to train AI models. Slack said it did not change its policies or practices, but updated the language to provide greater clarity.
Dive Insight:
Policies that grant vendors broad control over user data worry customers, and for good reason.
Companies don’t want sensitive internal data fueling third-party models. For organizations using generative AI, data ownership, data privacy and intellectual property protection are among the top concerns, according to an August Alteryx survey of 300 data leaders.
Slack said customer data is never accessed by LLM providers. However, unclear policy language can erode customer trust.
“Some Slack community members have asked for more clarity regarding our privacy principles,” a Salesforce spokesperson said in an email to CIO Dive. “We’ve updated those principles to better explain the relationship between customer data and generative AI in Slack, and published a blog post to provide more context.”
As AI adoption expands, users are looking more closely at the access and ownership rights vendors hold over their data.
“What a company says it can do with your data and what it can actually do with your data can be very different things,” Shiva Nathan, founder and CEO of app development startup Onymos, said in an email. “The SaaS model was already a challenge for data security and privacy before the AI boom — it is going to get much worse.”
Zoom found itself in a similar situation in August as criticism mounted over its policy language and customer data use. Although the company updated the language to exclude certain data from AI training, the episode kick-started a broader conversation about data protection.
Reading the fine print of user agreements is crucial for organizations planning to deploy AI tools. If a policy is unclear, CIOs and procurement teams should feel empowered to push back on vendor claims and request proof.