WASHINGTON — Consumers board a plane and trust the aircraft will get them safely from Point A to Point B without needing to understand the physics and mechanics of takeoff and landing. But that trust in airlines doesn't mean consumers don't also want, and need, the Federal Aviation Administration, Gene Kimmelman, president and CEO of Public Knowledge, a think tank specializing in intellectual property and internet law, said at an Axios roundtable on ethics and innovation in Washington, D.C., on Wednesday.
This principle extends to the regulation of advanced technologies such as artificial intelligence (AI), automation and big data. The government is reactive, rarely on the cutting edge, and as AI sweeps through the U.S. economy it must ensure its regulatory bodies are prepared to handle the accompanying challenges, Kimmelman said.
Some doubt the government's ability to handle AI's impending social and legal challenges. As an institution, the government is built for stability, not agility, Klon Kitchen, senior fellow for Technology, National Security and Science at the conservative think tank The Heritage Foundation, said at the event.
There are growing calls to restore funding and staffing to the diminished Office of Science and Technology Policy, an office that has led interagency efforts to understand and implement technology and advised the president, or to vest its responsibilities in other agencies.
Working with academia and industry is critical to setting norms, ethics and standards before it is too late, speakers said. Roundtables like Wednesday's, public-private partnerships and other avenues for dialogue are making early inroads.
Whether AI regulation will fall within the purview of specific agencies depending on the technology's application, such as autonomous vehicles falling to the National Highway Traffic Safety Administration, or to a body superseding individual agencies is yet to be determined. For some experts, creating an AI-specific body seems like an overstep: after all, the government has gotten by for decades without an internet-specific agency, according to Rob Atkinson, founder and president of the Information Technology and Innovation Foundation.
As AI begins to permeate every domain of business, from ad placement and cybersecurity tools to autonomous vehicles and intelligent assistants, demand for accountability will not be uniform. A credit scoring system, for example, will be held to more stringent standards than a dating app, said Michael Hind, distinguished research staff member in IBM Research's AI Department.
An AI-based or AI-incorporating tool is like any other product brought to market, and companies will be held to consumer protection and fairness standards. But as consumers become more aware of how technology powers daily interactions once taken for granted, such as ad placement on a website, expectations of and demands for accountability and transparency will rise.
But fear of regulation or negative outcomes should not chill innovation or deter researchers from working in AI, according to speakers.
The industry needs to run straight into conversations about ethics and accountability, answer consumers' questions, set expectations and dispel myths upfront, according to Colin Tooze, director of public affairs at Uber. What that entails is clear explanations of the technology and why it generates certain outcomes. What it doesn't entail is deep, "under the hood" discussions of how a neural network functions.
Transparency is trust
Transparency in AI will be critical, speakers at the Axios event said.
The explainability of outcomes is necessary to build and maintain consumer trust, reduce fear of the unknown and make clear the potential of the technology. Explainability is also critical when companies have to answer to regulators, said Terrell McSweeny, former commissioner on the Federal Trade Commission.
The free market rewards output explainability, as demonstrated by consumers who flock to products that are simple and understandable, said Nicholas Degani, special counselor to the FCC Chairman. Because most AI systems operate within strict parameters, he said, providing transparency around their function and output should be straightforward.
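What that kind of output explainability can look like is easiest to see in a toy example. The Python sketch below shows a hypothetical linear credit-scoring model that returns, alongside its score, a ranked per-feature account of why it reached that score; the feature names and weights are invented for illustration and come from no speaker, company or product mentioned here.

```python
# A minimal sketch of output explainability for a toy linear
# credit-scoring model. All feature names and weights are
# hypothetical, chosen only to make the mechanics visible.

WEIGHTS = {"payment_history": 0.45, "utilization": -0.30, "account_age_years": 0.15}
BIAS = 0.2

def score_with_explanation(applicant: dict) -> tuple[float, list[str]]:
    """Return a score plus a per-feature account of how it was reached."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    # Rank features by how strongly they moved the score, so a consumer
    # (or a regulator) can see why the model produced this outcome
    # without any "under the hood" discussion of the model's internals.
    reasons = [f"{feat}: {val:+.2f}"
               for feat, val in sorted(contributions.items(),
                                       key=lambda kv: -abs(kv[1]))]
    return score, reasons

score, reasons = score_with_explanation(
    {"payment_history": 0.9, "utilization": 0.6, "account_age_years": 4})
print(f"score={score:.2f}")
for r in reasons:
    print(" ", r)
```

A real system would be far more complex, but the principle scales: because the system's parameters are fixed and known, the explanation can be generated mechanically alongside every output.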
But the incentive structure for companies to proactively self-regulate, putting structures in place now to avoid problems down the road, is "perverse," according to Raffi Krikorian, CTO of the Democratic National Committee. Twitter, for example, is incentivized to drive engagement on its platform as a business imperative, while reducing polarization can take a backseat.
While some companies are engaging in proactive self-regulation, the market still needs incentives for implementing necessary protocols and devoting resources to figuring out regulatory structures, McSweeny said.
In cases where AI or another advanced technology fails to work as intended, there needs to be infrastructure to detect the misstep, although companies should already have conducted outcome audits to model potential negative results, said Laura Moy, deputy director of Georgetown Law's Center on Privacy and Technology. When these systems do not perform as expected, there need to be clear avenues of recourse for consumers, said Kristin Sharp, executive director of New America's Commission on Work, Workers and Technology.
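One way to picture such an outcome audit, sketched below in Python under assumptions of my own: replay a batch of logged decisions tagged with a group attribute and flag groups whose approval rate diverges sharply from the best-served group's. The data, group labels and four-fifths threshold are illustrative; the speakers did not specify any particular audit method.

```python
# A minimal sketch of an outcome audit over logged decisions.
# The "four-fifths" threshold is one common fairness heuristic,
# assumed here for illustration, not endorsed by any speaker.

from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions: list[tuple[str, bool]], threshold: float = 0.8) -> list[str]:
    """Flag groups approved at less than `threshold` times the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return [f"{g}: rate {r:.2f} is below {threshold:.0%} of best ({best:.2f})"
            for g, r in rates.items() if r < threshold * best]

logged = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for flag in audit(logged):
    print(flag)
```

Running a check like this before launch, and continuously afterward, is the kind of infrastructure that lets a company detect a misstep before regulators or consumers do.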
Just as in cybersecurity, companies need to incorporate transparency, accountability and ethics into AI today, not as an afterthought once products have been brought to market. By joining the dialogue in its nascence, a company can shape the technology's future and minimize the regulatory and economic repercussions that will fall on latecomers and corner-cutters.