WASHINGTON — Artificial intelligence will be one of the most transformative movements in human history, akin to humans standing upright on the evolutionary ladder or developing agriculture, according to Dean Garfield, president and CEO of the Information Technology Industry Council, speaking at a Politico event in Washington on Wednesday.
It's shortsighted to believe AI is not already affecting human society. The "toothpaste is already out of the tube," according to Rep. John Delaney, D-Md., chair of the House AI Caucus. While people are quick to jump to doomsday scenarios, there are a million steps and decisions humans can make between today and waking up in a Terminator-esque reality, he said.
As legislators and regulators parse out what to do — or, perhaps more importantly, what not to do — during AI's ascent, the importance of including academia, industry and civil society in the process is becoming clear. And as the race heats up in the international arena, now is the time to establish global leadership and drive the establishment of standards and principles for the technology.
Can the industry self-regulate?
Self-regulation does not demand a vacuum of government involvement, according to Garfield. A working relationship between government, private industry and academia is natural and necessary to establishing a set of principles for AI and making it human-centric.
Regulation may not be the first step, Delaney said, pointing instead to the establishment of common goals consistent with social values. But like NASA's alignment of government and the private sector during the space race, these goals cannot be accomplished within the confines of a single domain.
Civil society will play a critical role given that AI is not restricted to technical domains but is also prevalent in a variety of social applications, according to Rashida Richardson, director of policy research at the AI Now Institute at NYU. And the AI of today, even machine learning that is fairly automated, still involves a significant amount of human touch and decision-making.
There needs to be an emphasis on establishing standards of fairness and equity in AI, though this process is difficult, Richardson said. For example, technical understandings of fairness differ from legal understandings, and failure to find a middle ground or account for both could result in dangerous AI applications.
The introduction of biases, intentional or not, can reinforce the divide between the "haves" and "have-nots," widening economic disparity at the expense of society's underprivileged classes. While AI has tremendous potential to help solve social problems, it can just as easily make them worse.
And the ultimate barrier to AI adoption will not be technical, but social, Garfield said.
Government's role at home and abroad
China's authoritarian regime benefits by being able to force public-private partnerships for technology advancements, narrowing the U.S. lead in the global AI race, according to Rep. Will Hurd, R-Texas, speaking at the event. But by directing federal spending and research, unlocking data for companies to use and using AI in government as a test case, the American government has the power to help.
In the U.S., federal agencies and commissions are figuring out what AI means for them and how to use it, as AI caucuses and committees in Congress figure out their role. But in general, the federal government is notorious for lagging behind when it comes to technology, and it can be difficult to imagine agencies still struggling to move off legacy hardware and migrate to the cloud emerging as the new leaders in AI.
If the federal government doesn't do it, can states?
The California privacy bill was a recent example of a state taking up the mantle on a modern technology issue, but a smattering of different regulations at the state level can be bad news.
In the early days of mobile technology, for example, service often would stall across state lines, Garfield said. In fields like AI and data privacy, such a lack of portability, interoperability and widespread applicability could be bad for the development of the technology and American users.
But regulation often starts at the state level, and the federal government will need to work with states to nurture state regulation in areas it cannot control as well as find the areas where it should be taking up the mantle, according to Richardson.
Globally, a variety of national and international bodies are working on establishing a set of AI principles, though as of now no single institution is the obvious choice to be responsible for determining standards. This may demand a new international body to handle the issue, Garfield said.
Canada and the European Union are steadily taking on leadership roles in the space, and if the U.S. wants to lead, it needs to drive standards for AI that allies will follow, thereby establishing a de facto international system, according to Hurd.
The U.S. needs to be the "bridge," identifying the best first areas for international cooperation, Garfield said.