L. Frank Baum was way ahead of his time when he led Dorothy to expose the Great and Powerful Wizard of Oz as a charlatan. The Munchkins believed that the wonderful Wizard of Oz who had suddenly landed in their lives was a magician – he could summon thunder and lightning; he could create arcs of electricity; he could even make a humanoid face appear on a screen and make it speak.
But when Toto pulled back the curtain, we all learned that the Wizard’s power wasn’t really magic at all: it was a combination of very good technology and an audience that wanted to believe in magic. And when people really want to believe in something, they will suspend their doubts and simply accept it as beyond their comprehension. The curtain is always there; sometimes people decide not to look behind it.
Baum’s book was published in 1900 and the MGM movie debuted in 1939, but more than a century later the curtain has simply been replaced with an LCD screen. Just as in the original story, today’s awestruck audience is enthralled with the magic of AI, and the wizards behind most of the platforms aren’t in any hurry to dispel the aura.
Think of some recent videos of AI applications you’ve seen: computer vision in autonomous vehicles, robots that can solve a Rubik’s Cube® in seconds, an app that can scan a pile of Lego® pieces and suggest what can be built from them. We’re captivated by the magic – and we usually fail to realize that behind every one of those applications is an army of people who painstakingly labeled and annotated still images, videos, and other types of unstructured data. The accuracy of that labeling provides the critically important training data that allows an AI model to ingest and make sense of images it has never seen before.
Data labeling is essential to the success of any artificial intelligence (AI), machine learning (ML), or business intelligence (BI) implementation. It’s also tedious and labor-intensive, so it often gets shortchanged during platform installations. Sometimes the work is assigned to a data scientist or data engineer who rarely has the time to devote to this kind of effort. It should come as no surprise that ‘shortcuts’ are fairly common: reducing the number of images that are labeled; drawing rough approximations instead of tight bounding boxes to identify subjects; using off-the-shelf image sets instead of samples from actual use cases; and using images curated and labeled by another AI model.
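For readers wondering what a “label” actually looks like, the record below is a minimal sketch of one annotated image in a COCO-style layout, a common open format for image annotations. The file name, category name, and coordinates are all illustrative, not taken from any real project. The point is that a tight bounding box is just four numbers, and a model trained on sloppy numbers learns sloppy boundaries.

```python
# A minimal sketch of one labeled image record, loosely following the
# COCO annotation layout. File names, IDs, and coordinates are illustrative.

annotation = {
    "image": {"id": 1, "file_name": "warehouse_042.jpg", "width": 1280, "height": 720},
    "annotations": [
        {
            "image_id": 1,
            "category_id": 3,                     # e.g. "pallet" in a custom label map
            "bbox": [412.0, 220.5, 150.0, 98.0],  # [x, y, width, height] in pixels
        }
    ],
    "categories": [{"id": 3, "name": "pallet"}],
}

def bbox_area(bbox):
    """Area of an [x, y, w, h] box: a quick sanity check a labeling team can run."""
    _, _, w, h = bbox
    return w * h

print(bbox_area(annotation["annotations"][0]["bbox"]))  # 14700.0
```

A rough approximation that inflates or clips this box changes those four numbers, and with them every gradient update the model computes from this image.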
Data labeling is so important because it’s the key to putting unstructured data to work in an advanced technology application, yet a large number of implementations fail to incorporate unstructured data at all. Even data mesh and data fabric providers who claim to make ‘all’ of a company’s data available usually exclude it. That leads to a disturbing result: many companies that invest in an AI/ML/BI platform are initially disappointed with its performance. The disconnect is rarely due to the platform, and it’s generally not the fault of the users – the investment doesn’t deliver the expected returns because vital parts of the company’s total data inventory were unknowingly left out. And no model can learn from, or offer insights on, data that isn’t available to it.
While it might be easy to blame the Wicked Witch for the omission of images, videos, audio files, PDF documents, and other non-tabular data, the real reason is much less sinister. Although unstructured data is estimated to make up 80% of the new data produced each day, historically it hasn’t been considered a vital source of information. Clients are generally concerned with SQL tables, SAS datasets, and other ‘row-and-column’ sources, and platform providers are reluctant to disrupt their sales cycle by introducing variables the prospect didn’t ask for. So the topic doesn’t usually get discussed – until the installation fails to deliver on its promises.
The next time you’re captivated by AI “magic”, remember that the best applications rely on people working behind the curtain to turn images, videos, and other non-standard data into the structured datasets that power those amazing models. When planning your own advanced technology implementation, it shouldn’t take a house falling on you to remind you to include your company’s unstructured data.
To discover how Liberty Source, through its DataInFormation suite of solutions, helps you unlock the value trapped inside your unstructured data, contact Joseph Bartolotta, CRO at [email protected].