Dive Brief:
- Nvidia had a record quarter, more than tripling revenue year over year, according to the company’s Q3 fiscal 2024 earnings report. The chip manufacturer’s data center revenue hit $14.5 billion, up 279% year over year for the three-month period ended October 29.
- “The enterprise wave of AI adoption is now beginning,” Nvidia EVP and CFO Colette Kress said last week during the company's earnings call, pointing to a race to deploy AI copilots and assistants by software companies Adobe, Databricks, Snowflake and ServiceNow.
- Cloud providers drove roughly half of data center revenue growth, Kress said, noting strong demand for Nvidia graphics processing unit (GPU) technology from hyperscalers and a growing cadre of GPU-specialized CSPs. “Nvidia H100 Tensor Core GPU instances are now generally available in virtually every cloud,” Kress said.
Dive Insight:
As chaos erupted at OpenAI, Nvidia was in the process of remaking the very infrastructure enterprises — and cloud providers — are depending on to run large language model operations and scale generative AI capabilities.
AI-optimized infrastructure, including enhanced processing technology, platform solutions and data center capacity, drove robust revenue growth for the third consecutive quarter.
“Investment in infrastructure for training and inferencing large language models, deep learning recommender systems and generative AI applications is fueling strong broad-based demand for NVIDIA accelerated computing,” Kress said.
An ongoing cycle of AI-related product and service launches has accompanied record spikes in Nvidia’s revenue growth.
The company introduced the GH200 Grace Hopper server platform in August, increasing bandwidth, memory and multi-GPU connectivity, and announced plans for broad cloud-based deployments of HGX H200 chip technology earlier this month.
Kress confirmed that AWS, Microsoft Azure, Google Cloud and Oracle will be the first CSPs to offer H200-based instances starting next year.
The company launched an AI foundry for building and tuning customized generative AI enterprise applications in Azure public cloud last week at Microsoft Ignite.
“Customers can bring their domain knowledge and proprietary data and we help them build their AI models using our AI expertise and software stack in our DGX Cloud,” Kress said.
The DGX Cloud subscription service leverages Nvidia software, processors and AI expertise. It was initially deployed in Oracle’s public cloud infrastructure earlier this year.
“With each one of our partners, they rent a sandbox on DGX Cloud where we work together,” Nvidia President and CEO Jensen Huang said during the earnings call. “They bring their data. They bring their domain expertise. We've got our researchers and engineers.”
In addition to supporting hyperscaler build-outs, Nvidia infrastructure is fueling the emergence of a novel cloud ecosystem, according to Huang.
“You're seeing GPU-specialized CSPs cropping up all over the world, and they're dedicated to doing really one thing, which is processing AI,” Huang said.
Grace Hopper instances are now available through GPU-specialized cloud providers and are “coming soon to Oracle Cloud,” Kress said.
Correction: A previous version of this article misidentified Nvidia's fiscal year. The company reported earnings for the third quarter of its 2024 fiscal year.