Dive Brief:
- Google Cloud is opening up its infrastructure for Hugging Face developers to train and run their AI models, the two companies said Thursday.
- Developers can train, tune and run Hugging Face models with Google’s Vertex AI, or take more direct control of the underlying infrastructure via Hugging Face-specific Deep Learning Containers on Google Kubernetes Engine.
- The Hugging Face platform will feature Vertex AI and GKE deployment options in the first half of 2024, the AI startup said.
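As a rough illustration of the GKE option described above, a deployment could look like the following sketch. The container image URI and model ID here are placeholders, not actual Hugging Face Deep Learning Container names, which the companies had not published at the time of the announcement.

```yaml
# Hypothetical GKE Deployment running a Hugging Face model inside a
# Deep Learning Container. The image URI and model ID are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hf-text-generation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hf-text-generation
  template:
    metadata:
      labels:
        app: hf-text-generation
    spec:
      containers:
      - name: inference
        image: example-registry/huggingface-inference:latest  # placeholder image
        env:
        - name: MODEL_ID
          value: example-org/example-model  # placeholder Hugging Face model ID
        resources:
          limits:
            nvidia.com/gpu: 1  # one GPU for inference
        ports:
        - containerPort: 8080
```

In practice, the managed Vertex AI path would replace this hand-written manifest entirely; the containerized route trades that convenience for control over scaling, hardware and networking.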
Dive Insight:
Cloud providers are trying to position themselves as the go-to place for deploying and tuning generative AI models, creating marketplaces to give enterprises options and vying for partnerships with leading model providers.
Hugging Face has around 300,000 models on its platform, according to its website. The agreement marks one of the most significant AI-related partnerships for Google Cloud given the volume of models and developers on Hugging Face, a Google Cloud spokesperson said.
Hugging Face models are also available to enterprises via Amazon’s SageMaker JumpStart and Microsoft’s Azure Marketplace.
Enterprises take different approaches to reach their generative AI goals. Principal Financial Group, American Honda and Kraft Heinz are among the businesses leaning on existing vendor relationships to kick-start AI efforts.
Retailers and consumer packaged goods companies are most likely to combine in-house AI capabilities with vendor-developed LLMs, according to a Databricks survey. Around one-third of financial institutions use LLMs from vendors, while half both buy and build solutions.
The recent focus on AI has also illuminated the many potential ethical risks of deploying the technology as well as the importance of long-term reliability. Cloud providers’ growing AI model marketplace strategy gives enterprises another reason to expand current contracts.