Dive Brief:
-
As AI systems become more advanced, worries about safety are multiplying — but tech firms are doing what they can. DeepMind, Google's London-based AI company, announced a "suite of reinforcement learning environments" in November to test the safety of AI systems.
-
The "gridworlds" environments run an intelligent agent through scenarios and measure its ability to carry out its function, also assessing whether or not it can do so safely, according to a DeepMind blog post. Though the gridworlds are "simple environments," the company hopes the research will serve as a building block for others working on AI safety.
-
The AI systems are tested on "safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries," according to the report. The intelligent agents are unaware of the performance function applied to each environment.
Dive Insight:
If you ask Elon Musk, tech may be closer to Skynet than it is to household robots. But no matter the level of application, the implementation of AI warrants significant scrutiny.
The field is largely unregulated, and experts are encouraging legislators to take a light-handed regulatory approach. But biases can be introduced all along the pipeline, potentially derailing the technology from its original purpose.
DeepMind's testing scheme is an important part of QA that all AI companies should take. Creating a safe test environment will help train intelligent models before introducing them to environments where such system flaws could have an adverse affect. Cross-company collaboration and open-sourcing is steadily becoming the norm for AI as it becomes clear that no one can do it alone.