Dive Brief:
- Half a dozen technology companies submitted a letter to the U.S. House of Representatives' Task Force on Artificial Intelligence on Thursday, asking for governance frameworks and policy proposals that better represent the roles of vendors.
- The letter asked for clearer delineation of obligations for AI integrators — companies that retrain, modify and fine-tune AI models and incorporate them into other products and services. Legislators and regulators have primarily taken a binary lens to the landscape, categorizing players as either developers or deployers.
- Current policy debates “lack the nuance necessary to capture this complexity and ignore key components of the value chain,” the group, which includes Salesforce, Kyndryl, Twilio, Box, Alteryx and Peraton, said in the letter.
Dive Insight:
The regulatory landscape is challenging and complex for all parties involved.
CIOs have grown more concerned about keeping up with the global patchwork of rules and the impact of new regulations on adoption plans. Vendors have had to keep up, too, championing some policy proposals and lobbying against others. Legislators have worked to bridge knowledge gaps to craft effective proposals, but a comprehensive bill has yet to advance, lagging behind the pace of innovation and change.
“AI is quickly evolving and it's important that any regulatory framework has the flexibility to adapt and allow for innovation that benefits business and consumers,” Curtis Swager, Twilio's VP of government relations and public policy, said in an email to CIO Dive. “Getting the roles and responsibilities right from the start is a key to this work and we look forward to working with our government partners to achieve that goal.”
Advocates and players have stepped in to put a spotlight on the nuances needed to govern AI, from intellectual property protection to differing vendor roles within the landscape.
“While Congress moves to write broad legislation to regulate AI, the complexity and myriad use cases requires input from the companies that are playing a functional role in the AI ecosystem,” Michael Bradshaw, global practice leader for data, applications and AI at Kyndryl, said in a statement Thursday.
Technology companies have played a role in guiding policy thus far, attending Senate hearings to share information with officials on the landscape, proposing governance suggestions and agreeing to voluntary commitments. The sector's influence has worried some academics and industry experts, but tech companies have continued to push for open dialogue.
“The public and private sectors each have their own strengths, perspectives, and resources to bring to the table that must be considered to ensure the ethical development and deployment of AI while also inspiring innovation,” Hugh Gamble, VP of federal government affairs and policy at Salesforce, said in an email to CIO Dive.
Regulators grappling with the complexity have oversimplified the player landscape, according to the half-dozen companies.
“AI developers, for instance, create the foundational models for AI applications, and therefore, should be beholden to regulations focused on transparency, ethical design, safety, and bias prevention,” said Gamble. “AI integrators like Salesforce, on the other hand, incorporate these pre-existing models into larger systems so they can be used for specific business needs.”
Gamble said regulations for AI integrators should emphasize proper testing and responsible use.