Dive Brief:
- Nvidia is banking on everyday AI usage to sustain its high revenue growth as the technology is woven into routine processes, CEO Jensen Huang said Wednesday during the company’s Q4 2025 earnings call.
- "AI has gone mainstream and it's being integrated into every application," Huang said. "AI is used in delivery services everywhere, shopping services everywhere. If you were to buy a quart of milk that's delivered to you, AI was involved."
- The GPU chipmaker’s revenues grew 78% year over year to $39.3 billion during the three-month period ended Jan. 26. Fiscal year revenues increased 114% to $130.5 billion, as Nvidia ramped up production of the Blackwell GPU family introduced last year.
Dive Insight:
Nvidia rode a post-ChatGPT wave of generative AI enthusiasm through two years of sustained, rapid growth. Consumption of the GPU giant’s hardware has snowballed since February 2023, when the company reported roughly flat fiscal year revenues of $27 billion.
The Nvidia name became synonymous with the frenzy and hype surrounding large language model technologies, as its revenues more than quadrupled in 24 months.
A big chunk of the gains came from the largest cloud service providers.
Hyperscalers accounted for roughly half of the $282 billion spent on data center hardware and software in 2024, a year that saw infrastructure investments spike 34% year over year, according to Synergy Research Group. Spending on AI hardware alone reached an estimated $120 billion last year, IDC market research found.
Nvidia attributed roughly half of the $35.6 billion in Q4 data center segment revenue to the large cloud service providers, noting sales had nearly doubled year over year.
"Large CSPs were some of the first to stand up Blackwell," Nvidia EVP and CFO Colette Kress said Wednesday, pointing to deployments in AWS, Azure, Google Cloud and Oracle Cloud Infrastructure to meet "surging customer demand for AI."
Enterprise customers accounted for the other half of data center hardware sales, as model fine-tuning, agentic workflows and GPU-accelerated data processing gained momentum, Kress said.
AWS, Microsoft and Google each plan to sustain high levels of infrastructure spending to build out AI compute capacity this year, a trend that bodes well for the GPU business. But Nvidia is banking on enterprise consumption to drive revenue growth, too, as the company prepares to roll out a new chip configuration — Blackwell Ultra — during the second half of the year.
Huang said he expects enterprise and industrial consumption of AI resources to overtake model training in the long term.
“We have two additional scaling dimensions,” said Huang. “Post-training scaling, where reinforcement learning, fine-tuning and model distillation require orders of magnitude more compute than pretraining alone, [and] inference time scaling and reasoning, where a single query can demand 100x more compute.”