Dive Brief:
- Meta and IBM launched a collective aimed at boosting AI innovation "while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness," the two organizations announced Tuesday.
- The AI Alliance is composed of more than 50 startups, organizations and universities already working on the technology. The group includes AMD, Dell Technologies, Hugging Face, Oracle and Intel.
- "We believe it’s better when AI is developed openly – more people can access the benefits, build innovative products and work on safety," said Nick Clegg, president, Global Affairs at Meta, in the announcement. "We’re looking forward to working with partners to advance the state-of-the-art in AI and help everyone build responsibly.”
Dive Insight:
IBM and Meta's latest alliance aims to establish an open community that brings developers and researchers together to jointly address safety concerns surrounding the technology. The group plans to form a governing board and a technical oversight committee to set overall project standards and guidelines.
Microsoft and OpenAI — which have taken a leadership position in the recent generative AI wave — are notably absent from the list, as are other big tech providers such as Google and AWS.
In response to enterprise concerns over safe AI deployment, providers have steadily added new features and capabilities geared toward AI safety and privacy.
Meta launched its Llama 2 large language model last summer, available free of charge for commercial and research use. The model was trained on 2 trillion tokens of publicly available data, Meta said.
Alongside the launch of the model, the company provided a responsible use guide and an acceptable use policy among other resources aimed at supporting safe AI development.
ChatGPT maker OpenAI has also addressed privacy and safety concerns through new functionality. The company added the ability to turn off chat history in April and later brought more safety-focused capabilities to the enterprise-grade version of the tool.
Earlier this month, AWS added new guardrails to its Bedrock service, letting users implement application-specific safeguards based on use cases and responsible AI policies.