U.S. federal regulations around AI and intellectual property remain unclear as enterprises add the technology to an increasing number of workflows. Appian’s CEO Matt Calkins wants to help close the gap.
“We don’t talk nearly enough about the AI intrusion on personal privacy and private information, intellectual property, and that needs to be part of the conversation,” Calkins told CIO Dive.
The executive said he is working with U.S. legislators to help add transparency and training requirements to forthcoming AI regulation proposals.
Calkins included four main provisions in his proposed IP protection rules:
- All AI models must disclose all data sources.
- All AI use of private data requires consent and compensation.
- All AI use of personally identifiable information requires anonymization and permission.
- All AI use of copyrighted information requires consent and compensation.
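The four provisions amount to a vetting checklist that could, in principle, be applied to each data source before a model is trained or deployed. As a purely illustrative sketch — the class, field names, and logic below are assumptions for demonstration, not part of any actual proposal or tooling — the rules might be encoded like this:

```python
# Hypothetical sketch: the four proposed provisions expressed as a
# pre-deployment check on a model's declared data sources.
# All names and fields are illustrative, not a real API or standard.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    disclosed: bool           # provision 1: source is disclosed
    private: bool             # contains private data
    contains_pii: bool        # contains personally identifiable information
    copyrighted: bool         # contains copyrighted material
    consent: bool = False     # consent obtained from the data's owner
    compensated: bool = False # owner compensated for the data's use
    anonymized: bool = False  # PII stripped or masked
    permission: bool = False  # permission granted for PII use

def violations(source: DataSource) -> list[str]:
    """Return which of the four provisions a data source would violate."""
    issues = []
    if not source.disclosed:
        issues.append("undisclosed data source")
    if source.private and not (source.consent and source.compensated):
        issues.append("private data without consent and compensation")
    if source.contains_pii and not (source.anonymized and source.permission):
        issues.append("PII without anonymization and permission")
    if source.copyrighted and not (source.consent and source.compensated):
        issues.append("copyrighted data without consent and compensation")
    return issues

# Example: a disclosed source that includes PII but hasn't anonymized it
src = DataSource(name="web-crawl", disclosed=True, private=False,
                 contains_pii=True, copyrighted=False, permission=True)
print(violations(src))
```

A source passing with an empty list would satisfy all four provisions; anything else flags the specific rule at issue, which is the kind of transparency the proposal is arguing for.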
The guidelines came from Calkins' desire to boost trust in AI, a noteworthy hurdle as enterprise adoption efforts advance. Some industry peers are also joining the effort, like ScienceLogic Founder and CEO David Link.
“Matt and I fervently believe that emerging technologies gain either faster adoption … or greater trust if we actually understand the source of the data sets, and I think that’s something that’s lacking,” Link told CIO Dive.
For CIOs, not having clear IP protection rules creates a multipronged risk. Using an AI tool without knowing its data sources could open the business to unknown legal ramifications. Opacity can also limit an organization’s understanding of how a use case aligns with internal ethics standards.
Customers have voiced concerns about the transparency of AI models, too. Nearly two-thirds of customers reject the use of AI in customer service, according to a Gartner survey published in July. The lack of trust has created a rift that poses a threat to enterprises.
Zoom and Slack updated their terms and conditions following backlash against language that allowed broad control over customer data to train AI systems. Some vendors, like Adobe, have tweaked AI training policies to clear up confusion over data ownership as well, while others have expanded their control over customer data.
“The major players in AI are overreaching through fine print,” Calkins said. “Corners are being cut and trust is being lost in the frenzied rush to apply more information. Grabbing all this data at the cost of trust is completely the opposite of what we should be doing.”
Building trust
The current legislative landscape around AI is a patchwork of mostly proposed rules and regulations at the local or state level. The U.S. Copyright Office released the first part of its findings from a more than yearlong probe into policy issues raised by AI, but findings tied to the legal implications of training AI models have not yet been released. Courts are also working through a slew of copyright and intellectual property-related lawsuits against big tech companies.
“I don’t believe that current regulation is helping the industry,” Calkins said.
For now, enterprises must ensure that trust among employees, customers and companies doesn’t erode.
More than half of enterprises cite IP infringement among the top risks of generative AI, yet just one-quarter are working to mitigate the risk, according to a McKinsey report published in May.
Organizations that have focused on confronting the risk have worked to put in place better IP guardrails.
Procter & Gamble prioritized stronger intellectual property safeguards before rolling out its internal generative AI tool to employees last year. American Honda VP of IT Bob Brizendine said the loss of IP was one of the first risks the organization wanted to address before moving forward on adoption plans.
With only 6% of enterprises holding off on generative AI, according to Capgemini data, clear governance and practices that promote trust are crucial.
Employee training is also a good place to start with mitigation efforts, Calkins said. If it’s unclear how an AI model is obtaining its answers, employees shouldn’t blindly trust the technology.
Enterprises are expected to increase spending to curb copyright infringement and avoid loss of IP in the next two years, according to Gartner research. However, these efforts are likely to hurt ROI and slow adoption.