Dive Brief:
- U.K. regulators are assessing whether Alphabet’s partnership with Anthropic has impacted market competition, according to a Tuesday statement by the Competition and Markets Authority.
- The regulators said a request for public comments on the matter would close Aug. 13, ahead of any decision on whether to open a formal investigation. Alphabet and Anthropic did not provide CIO Dive with additional information on the matter.
- Google did not disclose how much it invested in the startup as the race to embed generative AI capabilities ramped up in 2023. But customers of Google’s Vertex AI gained access to the latest Anthropic models, and Anthropic leveraged Google’s infrastructure and chips. Amazon also holds a minority stake in the startup following a $4 billion investment. In March, Anthropic said AWS was its primary cloud provider.
Dive Insight:
Regulators continue to keep close watch over AI partnerships.
OpenAI and Microsoft’s alliance came under scrutiny from U.K. regulators in December. In April, the Competition and Markets Authority invited comments on the broader landscape of AI partnerships, including Microsoft’s deal with Mistral AI and Amazon’s investment in Anthropic.
Then, Microsoft's hiring of senior employees and leaders from AI startup Inflection caught U.K. regulators’ attention last month.
Cloud providers more broadly have also contended with deeper scrutiny as British regulators sounded the alarm on cloud vendor lock-in. In response, vendors have scaled back egress fees and decoupled bundled solutions.
Technology providers have more compliance steps on the horizon as the European Union’s AI Act goes into effect later this week, though enforcement will be phased through 2026.
Companies that operate in the EU will have to comply with the EU AI Act, whether or not the organization is based there. Luckily, analysts told CIO Dive the language of the rules closely mirrors recent AI guidance from the National Institute of Standards and Technology and the U.S. AI Safety Institute, easing compliance for organizations that are staying up to date.
AI rules in the U.S. are still fairly lax, with vendors largely left to voluntarily commit to standards.
While President Joe Biden’s October AI executive order laid the groundwork for stronger NIST risk mitigation frameworks and research into the technology’s safety, the federal government has left proponents of clear-cut laws limiting AI wanting more.
Federal agencies have emphasized that AI providers must still operate under existing laws, warning consumers of dangerous tactics, such as AI washing.