Dive Brief:
- Machine learning systems can be vulnerable to discriminatory biases, according to Yieldify CTO Richard Sharp in an article for Entrepreneur.com.
- A number of studies have shown that unconscious bias can slip into machine learning algorithms, such as those behind personalized online advertising, if efforts aren't made to ensure those algorithms are fair.
- This could be particularly vexing as machine learning moves into high-stakes areas like credit scoring, hiring, or criminal sentencing.
Dive Insight:
An examination of machine learning systems found they can discriminate by propagating prevailing social biases.
"If you train a machine learning algorithm on real data from the world we live in, it will pick up on these biases," Sharp wrote. "And to make matters worse, such algorithms have the potential to perpetuate or even exacerbate these biases when deployed."
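The mechanism Sharp describes is easy to demonstrate on toy data. The sketch below is purely illustrative and uses made-up numbers, not anything from Yieldify or the studies cited: it generates a synthetic hiring history in which applicants from one group were held to a higher bar, trains a minimal logistic regression on that history, and shows the model learns to penalize the group flag directly.

```python
import math
import random

random.seed(0)

# Hypothetical toy data: each applicant has a skill score (0-1) and a
# group flag (0 or 1). The historical labels are biased: group-1
# applicants were held to a higher skill bar for the same outcome.
def make_biased_data(n=2000):
    data = []
    for _ in range(n):
        skill = random.random()
        group = random.randint(0, 1)
        # Biased historical decision: group 1 needs skill > 0.7, group 0 only > 0.5.
        hired = 1 if skill > (0.5 + 0.2 * group) else 0
        data.append(((skill, group), hired))
    return data

# Minimal logistic regression trained by stochastic gradient descent
# (no libraries, so the example stays self-contained).
def train(data, lr=0.5, epochs=200):
    w = [0.0, 0.0]  # weights for (skill, group)
    b = 0.0
    for _ in range(epochs):
        for (x, y) in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

w, b = train(make_biased_data())
# The model reproduces the historical bias: the group feature ends up
# with a clearly negative weight, penalizing group-1 applicants outright.
print(w[1])
```

Nothing in the generated data says group membership matters, only the biased historical labels do; the model picks up the pattern anyway, which is exactly the perpetuation effect Sharp warns about.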
Sharp suggests tech companies educate their employees about the negative implications of the biases in the models they are building. For example, Google and Facebook both recently developed training courses on unconscious bias.
Without addressing these issues, companies' systems can end up with deeply embedded social biases, introducing risks to both the company and its users.