On the heels of the rising ChatGPT tide and expanding business use cases, vendors are now adding generative AI capabilities to CRMs and ERPs.
Tech giant Microsoft launched generative AI capabilities across business functions, including CRM and ERP systems, via Microsoft Dynamics 365 Copilot, the company announced Monday.
Enterprise software company Salesforce also added generative AI capabilities to its CRM solution, the company announced Tuesday. Dubbed Einstein GPT, the generative AI solution enables customers to connect real-time data from Salesforce with OpenAI’s advanced AI models out of the box or bring their own external model.
OpenAI’s ChatGPT will also be available through Salesforce-owned Slack, providing conversation summaries, research tools and writing assistance for drafting messages.
While Microsoft claims to offer "the world's first AI copilot in both CRM and ERP," Salesforce is also claiming the mantle of the "world's first generative AI for CRM." But the two companies are unlikely to be the last vendors to add similar capabilities to their software.
“With near market saturation of cloud CRM customers, the next phase for CRM vendors is going to compete on the generative AI capabilities of the applications and platforms built on cloud-native infrastructure and foundational AI models,” Jason Wong, distinguished VP analyst at Gartner, said in an email.
AI software spending is set to nearly double between 2021 and 2025, reaching $64 billion, according to Forrester data published in October.
“ChatGPT was the cherry on the cake of a trend that started a few years ago,” Arun Chandrasekaran, distinguished VP analyst at Gartner, said in an email. “I do believe that most enterprise users will consume these large AI foundation models via such intelligent apps, where the AI models are natively integrated with their existing workflows and applications.”
For most businesses, enhancing CRM or ERP solutions with generative AI can improve workflow automation, enrich customer experience with contextual information and accelerate content generation, according to Chandrasekaran. Other natural language use cases, such as summarization and enterprise search, could benefit as well.
What to watch
Behind the veil of public and enterprise hype, there are potential risks that tech leaders need to recognize once vendors add generative AI capabilities to their products.
Leaders must consider how vendors address challenges such as "hallucinations," or plausible but incorrect outputs, demystify the black-box nature of underlying models, evolve and update models, and minimize legal and compliance risks, Chandrasekaran said.
Ethical considerations must also be top of mind.
“Historically, companies, regardless of size, have been challenged with deploying AI applications while adhering to the guiding principles of responsible AI,” Bill Wong, principal research director at Info-Tech Research Group, said in an email. “Given that, one should expect that most vendors offering generative AI capabilities have not adhered to the guiding principles of responsible AI.”
More than 4 in 5 leaders believe responsible AI should be a top management priority, though only one-quarter of leaders say their organization has a fully mature responsible AI program, according to a MIT Sloan Management Review and Boston Consulting Group survey of more than 1,000 respondents published in September.
In general, ethical AI frameworks cover privacy, model explainability and transparency, safety and security, fairness, governance and accountability.
“While the benefits of increased productivity and personalization are compelling from these announcements, the organizations should be aware of the potential risk that can come with generative AI-based solutions,” Bill Wong said.
Both Microsoft and Salesforce have stated that their teams followed ethical best practices in developing the newly launched products. Given that, Bill Wong said, customers should be able to ask, and receive answers to, the following questions:
- Privacy concerns: How is sensitive customer data used? Who and what applications have access to the data? Is the data secure?
- Bias concerns: What data is being used to train the AI model? Is it representative of the customer’s perspective? Who maintains the model’s performance over time?
- Explainability concerns: How are decisions or predictions made that involve customer data? What other data is being used to train the model?