Dive Brief:
- Dell Technologies partnered with Nvidia to bring generative AI capabilities to on-prem servers and workstations, the companies said in a May 23 announcement.
- The joint offering, called Project Helix, pairs Dell servers with Nvidia graphics processing units to support pretrained large language models and AI developer tools for building chatbots and other applications. The solution will be available in July, the companies said.
- In a similar move, Nvidia partnered with Microsoft to infuse Windows 11 with a suite of new AI capabilities for PC users, the two companies said in a separate announcement.
Dive Insight:
The two partnerships bring to enterprise servers and workstations the parallel processing power that cloud providers are leaning on to handle an impending surge of compute-intensive generative AI workloads.
Silicon Valley chip-design company Nvidia, which briefly saw its valuation hit the elite $1 trillion mark last week, is a prime mover in both developments. The company’s GPUs, originally designed for video games and other graphics-intensive applications, are better suited to the highly parallel math of LLM workloads than traditional CPUs.
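As a rough, hypothetical illustration of that difference, and not something drawn from either announcement, the sketch below times the kind of large matrix multiplication that dominates LLM workloads on a CPU and, when one is present, on a CUDA-capable GPU, using PyTorch.

```python
# Minimal sketch (illustrative only): comparing a large matrix multiply on
# CPU versus GPU. Transformer-style models spend most of their time on
# operations like this, which GPUs spread across thousands of parallel cores.
# Assumes PyTorch is installed; skips the GPU run if no CUDA device exists.
import time
import torch

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

# CPU baseline
start = time.perf_counter()
_ = a @ b
print(f"CPU matmul: {time.perf_counter() - start:.3f}s")

# GPU run, if a CUDA device is available
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()  # finish transfers before timing
    start = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()  # wait for the kernel to complete
    print(f"GPU matmul: {time.perf_counter() - start:.3f}s")
```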
“Our collaboration with Nvidia enables customers to quickly deploy generative AI on-premises at scale using their own proprietary data, safely and securely,” Chuck Whitten, Dell co-COO, said during a Thursday earnings call.
Data security concerns are one roadblock to widespread enterprise adoption of the new technology. The sheer size and complexity of LLMs, and of the generative AI applications they enable, pose another. Tech providers have been maneuvering around both as companies explore potential business applications.
“While there's a lot of discussion around these large, generative AI models, we think the more specific opportunity is around domain-specific and process-specific generative AI, where customers can use their own data,” Whitten said.
The buildout of data centers and other infrastructure to support AI workloads is underway, and Whitten acknowledged it will take time for Dell to shift its business.
“Excitement for AI applications is ahead of GPU supply right now,” Whitten said. “AI-optimized servers are still a very small part of our overall server mix.”
The Windows 11 enhancements expand upon previous deployments of ChatGPT and other AI tools throughout Microsoft’s enterprise tech suite, including additions to Azure and Microsoft 365 Copilot announced in May.
Windows workstations equipped with Nvidia RTX GPUs will allow developers to “work with models on Windows that were previously only available on servers,” the announcement said.
Updates to the Windows Subsystem for Linux bring Nvidia’s AI software stack into the operating system, allowing workloads to run on GPUs that match data center processors. This eases the transfer of models to Azure for large, cloud-based training runs, the companies said.
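A hypothetical example of what that portability can look like in practice, assuming a PyTorch environment rather than any specific tool named in the announcement: the same short script detects whatever GPU is present and runs unchanged on an RTX workstation under WSL or on an Azure GPU instance.

```python
# Minimal sketch (an assumption, not code from the announcement): a developer
# confirms that a CUDA-backed stack is visible from Python under WSL, then
# runs the identical script on a cloud GPU instance for larger jobs.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
if device.type == "cuda":
    print(f"GPU: {torch.cuda.get_device_name(device)}")

# A toy workload standing in for model fine-tuning or inference; the point is
# that the code is the same on a local RTX workstation and a cloud GPU VM.
weights = torch.randn(2048, 2048, device=device)
inputs = torch.randn(64, 2048, device=device)
outputs = inputs @ weights
print(outputs.shape)
```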