Dive Brief:
- AWS unveiled several services to ease the implementation of AI security controls Tuesday, the latest push from tech providers to set guardrails around AI development.
- Customers using Amazon Bedrock, a hub for organizations to deploy and fine-tune foundation models, can implement application-specific safeguards based on use cases and responsible AI policies. The content filters work across models, including Anthropic’s Claude and Meta’s Llama 2.
- If enterprise customers specify topics models should avoid, the guardrails will detect and block queries or responses that fall into the restricted categories, as sketched in the code below. The customizable guardrails are currently in preview, and Amazon has other features under development.
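AWS did not detail the preview API in the announcement, but a minimal sketch of defining such a denied topic with the boto3 Bedrock control-plane client, assuming the create_guardrail interface Bedrock guardrails later shipped, could look like this. The guardrail name, topic definition, and blocked-request messages are illustrative assumptions, not part of the announcement.

```python
import boto3

# Control-plane client for Amazon Bedrock; the region is an illustrative choice.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail with one denied topic. Every name, definition and message
# below is a hypothetical example for a customer-support assistant.
response = bedrock.create_guardrail(
    name="support-assistant-guardrail",
    description="Blocks investment-advice queries in a support chatbot",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Investment advice",
                "definition": "Recommendations on buying or selling specific securities.",
                "examples": ["Which stocks should I buy this year?"],
                "type": "DENY",  # detect and block matching queries and responses
            }
        ]
    },
    # Canned replies returned when an input or output trips the filter.
    blockedInputMessaging="Sorry, I can't help with that topic.",
    blockedOutputsMessaging="Sorry, I can't help with that topic.",
)
print(response["guardrailId"], response["version"])
```

Once created, a guardrail can be referenced at inference time, so the same topic policy applies across models such as Claude and Llama 2 without per-model prompt engineering.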
Dive Insight:
AWS is the latest technology provider to address enterprise worries over the secure adoption of AI.
OpenAI added privacy guardrails and built security controls into its enterprise offerings. Salesforce placed restrictions on use cases for its AI tools. Zoom updated its terms and conditions to emphasize customer consent, following persistent criticism of language that allowed the company to use customer data to train its AI systems.
Even a year into the generative AI boom, organizations still grapple with how to adopt and experiment with the technology safely and responsibly.
AWS will soon add the ability to redact personally identifiable information so that, for example, call center summaries of customer and agent conversations omit sensitive details, with other use cases to follow.
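That capability had not shipped at the time of writing, so any code is speculative; if it follows the shape of the guardrail sketch above, a PII-redaction policy might look something like the fragment below, with the entity types and anonymize action as assumptions.

```python
# Hypothetical sketch only: this forthcoming capability was not yet released,
# and the parameter names below are assumptions modeled on the guardrail API.
sensitive_info_policy = {
    "piiEntitiesConfig": [
        {"type": "NAME", "action": "ANONYMIZE"},   # replace names with placeholders
        {"type": "EMAIL", "action": "ANONYMIZE"},  # mask email addresses
        {"type": "PHONE", "action": "ANONYMIZE"},  # mask phone numbers
    ]
}

# Supplied as sensitiveInformationPolicyConfig=sensitive_info_policy when
# creating a guardrail, matching fields in call-center transcript summaries
# would be redacted rather than passed through.
```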
The company also released a Bedrock service to help customers evaluate, compare and select the right foundation model for a specific use case based on metrics such as accuracy and safety. The service, currently in preview, offers customizable automatic evaluations, letting customers select criteria and upload their own test datasets.
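The preview interface wasn't published in the announcement; a minimal sketch of submitting an automatic evaluation job via boto3, assuming the create_evaluation_job call Bedrock's model evaluation later exposed, might look like the following. The job name, IAM role, S3 paths, dataset and model identifier are all hypothetical placeholders.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Kick off an automatic evaluation of one candidate model against a customer
# dataset. All identifiers below are illustrative placeholders.
bedrock.create_evaluation_job(
    jobName="summarization-eval-demo",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",  # hypothetical role
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "Summarization",
                    "dataset": {
                        "name": "CustomSummarizationSet",
                        "datasetLocation": {"s3Uri": "s3://example-bucket/eval/dataset.jsonl"},
                    },
                    # Built-in criteria covering accuracy and safety.
                    "metricNames": ["Builtin.Accuracy", "Builtin.Toxicity"],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {
                "bedrockModel": {
                    "modelIdentifier": "anthropic.claude-v2",
                    "inferenceParams": "{\"temperature\": 0}",  # JSON string of model settings
                }
            }
        ]
    },
    outputDataConfig={"s3Uri": "s3://example-bucket/eval/results/"},
)
```

Results land in the output location, where scores on the selected metrics can be compared across candidate models.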
While more than half of businesses have adopted generative AI to some extent, just one-third of organizations have implemented governance around responsible generative AI, according to a PwC survey of 1,026 executives in the U.S.
Without a baseline of acceptable use policies, it’s hard for risk-conscious organizations to know where to start. Business leaders cite difficulty finding appropriate use cases as the most common barrier holding back AI adoption, followed by concern over legal issues, according to an O’Reilly report published last week.
“Like many organizations, we're working to balance our excitement for generative AI with our need to be informed and responsible from a security and compliance perspective,” said Kathy Kay, EVP and CIO at Principal Financial Group, in an email.
Technology providers of all sizes are working to bridge these gaps for customers, with a dual effect. For enterprises, solutions that target these challenges lower the barriers to adoption. For providers, they help ensure that the infrastructure overhauls and lofty investments made to support compute-hungry generative AI workloads don’t go to waste.