Dive Brief:
- IBM announced PowerAI DDL on Tuesday, a new technique designed to reduce the time it takes to train distributed deep learning systems, according to a company blog. The company said the technology shortens the time needed to analyze large volumes of data and surface insights by optimizing data transfers between the hardware components running a deep neural network.
- Rather than slowing down as computation spreads across disparate systems, IBM cut the time it takes to train a model: the company achieved 95% scaling efficiency running on 64 IBM Power systems with the Caffe deep learning framework, beating the previous record of 89% held by Facebook. (A sketch of the scaling-efficiency calculation follows these briefs.)
-
Big Blue says it is only releasing PowerAI DDL for its own hardware, however. Earlier this month, Facebook released a distributed neural network optimization under an open source license.
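For context, scaling efficiency measures how close a multi-system run comes to perfect linear speedup: achieved throughput divided by the single-system throughput times the number of systems. A minimal sketch of that calculation, using illustrative throughput figures rather than IBM's published measurements:

```python
# Scaling efficiency: achieved throughput divided by the ideal (linear) throughput.
# The throughput numbers below are illustrative placeholders, not IBM's figures.

def scaling_efficiency(single_node_throughput: float,
                       cluster_throughput: float,
                       num_nodes: int) -> float:
    """Return scaling efficiency as a fraction of ideal linear scaling."""
    ideal_throughput = single_node_throughput * num_nodes
    return cluster_throughput / ideal_throughput

# If one system processed 100 images/sec, perfect scaling across 64 systems
# would be 6,400 images/sec; a measured 6,080 images/sec works out to the
# 95% efficiency IBM reports.
print(scaling_efficiency(100.0, 6080.0, 64))  # -> 0.95
```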
Dive Insight:
Training AI-based systems is difficult because of the sheer volume of data they require. The work must often be split across multiple computers, which introduces new challenges around keeping that data synchronized.
But IBM says PowerAI DDL can split training tasks among multiple systems and then bring the results back together in a synchronized, efficient manner. That means AI workloads could be completed much faster, allowing scientists and other AI users to turn experiments around sooner.
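IBM's post does not detail PowerAI DDL's internals, but the pattern it describes, splitting work across systems and then merging the results in sync, matches the general shape of data-parallel training with gradient averaging. Here is a minimal, generic sketch of that idea, not PowerAI DDL's actual implementation; the workers and gradients are hypothetical:

```python
import numpy as np

# Data-parallel training, simplified: each worker computes gradients on its
# own shard of the data, then the gradients are averaged ("all-reduced") so
# every copy of the model applies the same synchronized update.
# Generic illustration only, not PowerAI DDL's actual implementation.

def all_reduce_average(worker_gradients):
    """Average gradients across workers; each worker applies the same result."""
    return np.mean(worker_gradients, axis=0)

# Four hypothetical workers, each holding a gradient for the same 3 parameters.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(3) for _ in range(4)]

weights = np.zeros(3)
learning_rate = 0.01
weights -= learning_rate * all_reduce_average(grads)  # one synchronized step
print(weights)
```

The communication step is where efficiency is won or lost at scale; IBM's claim is essentially that it has optimized how that merge travels between systems.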
While IBM’s new solution is viable right now only for companies with large stocks of powerful hardware, that could change as the technology continues its rapid advance. It’s one more step in the evolution of AI and machine-based systems capable of completing tasks once believed impossible.