Few businesses operate within the geographic confines of national boundaries. Business technology platforms and tools have to cut across not only oceans, but language and cultural barriers too.
Artificial intelligence is increasingly woven into the fabric of communication and globalization, and imbuing multilingual and multicultural capabilities into its algorithms is critical. Thankfully, it doesn't take as long to teach a computer a new language as it does a human, but the process is stymied by resource limitations and nuance between and within languages.
Intelligent, multilingual systems such as Amazon's Alexa and Google Assistant are among the more prominent cases of broad language capabilities in modern AI applications. As more businesses replicate their success, the natural language processing community and market grow steadily.
Duolingo doesn't work on an algorithm
Even if they know a real person isn't sitting on the other end, customers interacting with a chatbot or AI-powered communication device don't want to feel like they are talking to a machine. The relationship has to be as human and fluent as possible.
The most basic model of a system processing a foreign language is pure machine translation. As any language student who has ever copied and pasted a block of text into a translation program knows, this process is often fraught with error. By translating each word and then combining the results into a string without translating the sentence as a single unit, the result often feels broken.
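The failure mode is easy to demonstrate. The toy sketch below translates word by word with an invented mini-dictionary (the dictionary and sentence are illustrative only, not from any real MT system):

```python
# Toy illustration of why word-by-word machine translation feels broken.
# The mini English->Spanish dictionary is invented for demonstration.
WORD_DICT = {
    "the": "el", "white": "blanco", "house": "casa", "is": "es", "big": "grande",
}

def word_by_word(sentence: str) -> str:
    """Translate each token independently and re-join, ignoring syntax."""
    return " ".join(WORD_DICT.get(w, w) for w in sentence.lower().split())

print(word_by_word("The white house is big"))
# -> "el blanco casa es grande": word order and gender agreement are wrong;
# a fluent Spanish rendering would be "la casa blanca es grande".
```

Because each word is translated in isolation, adjective order and grammatical agreement are lost, which is exactly the "broken" feel the sentence-level approach avoids.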
Effective, pure machine translation models are difficult to create. Researchers have to build out a knowledge base with the local culture and language, said Eyal Grayevsky, CEO and co-founder of Mya Systems, an AI recruiting assistant company, in an interview with CIO Dive.
Mya uses an ensemble model that compares multiple natural language understanding models against each other to get the best output. One of these models uses machine translation, and its confidence level is compared against a native implementation's, strengthening the system over time until it becomes fully native.
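A confidence-weighted ensemble of this kind can be sketched in a few lines. The model names, intents and scores below are invented stand-ins; Mya's actual architecture is not public:

```python
# Hypothetical sketch of an ensemble that keeps the highest-confidence parse.
def ensemble(utterance, models):
    """Run every NLU model and return the (intent, confidence) pair it trusts most."""
    results = [m(utterance) for m in models]
    return max(results, key=lambda r: r[1])

# Two stand-in "models": one routed through machine translation, one native.
# Real models would return genuine parses; these return fixed example scores.
translate_then_parse = lambda text: ("schedule_interview", 0.71)
native_parser = lambda text: ("schedule_interview", 0.88)

intent, conf = ensemble("Can we set up an interview?",
                        [translate_then_parse, native_parser])
# The native model wins here (0.88 > 0.71); tracking these confidence gaps
# over time shows when the native model is strong enough to stand alone.
```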
A bespoke solution for whatever language a company wants to add to its portfolio would be ideal, but it's hard to source that much data and execute on it, James Maddox, CTO and co-founder at Mya, said in an interview with CIO Dive.
No two languages are alike
Adding new languages can be difficult because of unique structures and patterns. Similarities within language families, such as the Romance languages, can facilitate the process of adding new language capabilities.
But when adding a language outside of the families already in play, different processes may be in order, said Kevin Gao, CEO and co-founder of Comm100 Network Corporation, a customer service and communication provider, in an interview with CIO Dive.
For example, tokenization is easier in languages that use spaces to delineate separate words, unlike Japanese and Chinese, which run words together without them.
Yet even within languages that use spaces, segmentation can have different demands. German is well known for run-on compound nouns such as "Rechtsschutzversicherungsgesellschaften" (insurance companies providing legal protection) or "Geschwindigkeitsbegrenzung" (speed limit) that make translating individual nouns difficult.
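The contrast can be sketched with a toy segmenter. Whitespace splitting handles English, while a longest-match-first lookup handles space-free scripts and German compounds alike; the tiny vocabularies here are invented for illustration:

```python
# Minimal sketch: whitespace tokenization vs. dictionary-based segmentation.
def whitespace_tokenize(text):
    """Works for languages that mark word boundaries with spaces."""
    return text.split()

def greedy_segment(text, vocab):
    """Longest-match-first segmentation for text without word spaces."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: emit as-is
            i += 1
    return tokens

print(whitespace_tokenize("speed limit"))          # ['speed', 'limit']
print(greedy_segment("速度制限", {"速度", "制限"}))   # ['速度', '制限'] ("speed limit")
# The same longest-match idea splits a German compound; the standalone "s"
# is the linking element German inserts between compound parts:
print(greedy_segment("geschwindigkeitsbegrenzung",
                     {"geschwindigkeit", "begrenzung", "s"}))
```

Real tokenizers are far more sophisticated, but the sketch shows why each language family can demand its own segmentation machinery.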
Once a sentence is segmented, words are mapped to vectors: series of numbers that encode meaning. For example, the numbers for the word "brighten" are similar to those for "illuminate" but different from those for "darken," Maddox said.
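That similarity is typically measured as the cosine of the angle between vectors. The three-dimensional vectors below are made up for illustration; real embeddings have hundreds of dimensions learned from large text corpora:

```python
# Sketch of comparing word vectors with cosine similarity.
# The 3-dimensional vectors are invented for demonstration only.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, negative for opposites."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vectors = {
    "brighten":   [0.9, 0.8, 0.1],
    "illuminate": [0.85, 0.75, 0.2],
    "darken":     [-0.8, -0.7, 0.1],
}

print(cosine(vectors["brighten"], vectors["illuminate"]))  # close to 1.0
print(cosine(vectors["brighten"], vectors["darken"]))      # negative
```

Synonyms land close together in the vector space, antonyms point away from each other, which is what lets a model generalize beyond exact word matches.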
French, German and Spanish capabilities each took approximately eight weeks end-to-end to build and roll out, Maddox said. As Mya looks to add Asian language skills to its tool, starting with simplified Chinese, it will require a lot of custom work.
Google, which debuted multilingual support for the Google Assistant in August, conducted the multiyear effort in three phases: multiple language identification, understanding and recognition.
Not only does the assistant have to fluently understand languages including Spanish, English, Italian, German and Japanese, but it also has to identify and process language pairs in tandem so that a command like "turn off the lights" will be understood and executed no matter which language it is delivered in.
Language hybrids, such as Spanglish, make multilingual capabilities even more complex.
Getting out of the data comfort zone
Sourcing data to train new language systems, like sourcing data for any AI project, is an ongoing struggle for companies.
Quality conversational data and a natural language corpus are needed. For niche offerings like an HR AI tool, data common to the HR space, such as a conversation about workplace benefits, is hard to find off the shelf, Maddox said. Companies often improvise with "bootstrapping techniques," creating the scenario in-house.
It's hard for small companies to pull this off on their own: The challenge demands a lot of people, talent and money to train algorithms, according to Gao. Working with AI partners or online published tools can help mitigate data deficiencies.
But feeding word translations into an algorithm without the cultural counterpart also leads to incomplete systems.
Even within the confines of one language there can be massive differences. In the U.K., for example, a word like "fortnight" is common, whereas in most places in the U.S. it would sound outdated. And "chips" and "braces" both convey different things between the two countries — and no one wants to get potato chips when they think they are getting fries, or teeth-straightening apparatuses instead of suspenders. Within the U.S., regional dialects make uniformity even harder.
So much of language is localized and tied to cultural understandings. Tech giants operating out of North America or Europe must be cognizant of bringing cultural biases and assumptions into new markets.
This spring, Google began crowdsourcing image recognition for its machine learning systems, targeting areas outside the well-documented sphere of western culture and affluence. Images unique to Asian, Latin American or African cultures, for example, are not as well documented by researchers working within the confines of a western upbringing and society.
Algorithmic literacy outside of western areas is critical for any company looking to succeed in new international markets.
Launching into a new language market means years of preparation. Amazon's Echo devices started spreading in France this summer, adding to the assistant's English, German and Japanese fluency. Amazon started the migration far earlier with Alexa in fulfillment centers, then moved to early access customers to figure out what questions were common and suss out accents, reports WIRED.
The company has also built a multilingual transliteration system to convert between language scripts, creating a dataset that pairs English names with their Japanese, Hebrew, Arabic and Russian counterparts.