Dive Brief:
- AWS is pouring $110 million into university-led generative AI research through a program called Build on Trainium, which promotes use of its Trainium machine learning chip, the cloud giant said Tuesday.
- The program provides compute hours for researchers to build AI architectures, ML libraries and performance optimizations. All advancements created through the program will be open sourced. Participants so far include researchers from Carnegie Mellon University and the University of California, Berkeley.
- “Researchers involved in Build on Trainium will publish papers on their work and will be asked to bring the code into the public sphere via open-source machine learning software libraries,” AWS said in the release. “This collaborative research will become the foundation for the next round of advancements in AI.”
Dive Insight:
While AWS partners with chipmaker Nvidia, it also wants to challenge the computing status quo amid surging demand for capacity to support compute-intensive AI workloads.
Nvidia sustained triple-digit revenue growth for a fifth consecutive quarter, reporting record quarterly revenue of $30 billion in its latest earnings report in August. The chipmaker has become the leading provider of AI processing power amid insatiable demand.
Nvidia also has its own version of a grant program for higher education researchers. The company provides cloud, hardware and resources to advance research in generative AI, data science, vision and other priority areas.
Amazon tends to hold a lead partner role when Nvidia releases new chips, according to Amazon CEO Andy Jassy. The cloud giant was the first to offer Nvidia's H200 chips, Jassy said.
“While we have a deep partnership with Nvidia, we’ve also heard from customers that they want better price performance on their workloads,” Jassy said during the company’s Q3 2024 earnings call in October. “As customers approach higher scale in their implementations, they realize quickly that AI can get costly. It’s why we’ve invested in our own custom silicon in Trainium for training and Inferentia for inference.”
The second version of AWS' Trainium chip will begin ramping up this year, Jassy said.
“We’re seeing significant interest in these chips,” Jassy said. “We’ve gone back to our manufacturing partners multiple times to produce much more than we’d originally planned.”