UNESCO’s seminal 2019 report showed that AI-powered voice assistants such as Alexa and Siri were perpetuating harmful gender stereotypes, and that sexist abuse directed at this ‘feminized’ technology had even been anticipated by the tech companies.
In this example from UNESCO, bias in the data inevitably leads to bias in the output. Hence our statement: ‘bias in = bias out’. Notably, the developers in this example apparently already knew about certain imbalances and biases in the data. So, how do we overcome this?
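To make the ‘bias in = bias out’ idea concrete, here is a minimal, purely illustrative sketch (not from the UNESCO report; the data and the naive model are entirely hypothetical). It shows how a model trained on historically skewed records simply reproduces that skew in its predictions:

```python
from collections import Counter

# Hypothetical, deliberately skewed "historical" records (bias in):
# past hiring decisions favoured one group regardless of merit.
training_data = (
    [("male", "hired")] * 80
    + [("male", "rejected")] * 20
    + [("female", "hired")] * 30
    + [("female", "rejected")] * 70
)

def fit_majority_model(rows):
    """Naive 'model': predict the most common historical outcome per group."""
    outcomes = {}
    for group, label in rows:
        outcomes.setdefault(group, Counter())[label] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in outcomes.items()}

model = fit_majority_model(training_data)
print(model)  # {'male': 'hired', 'female': 'rejected'} -- the skew comes straight back out
```

The model does nothing “wrong” statistically; it faithfully learns the pattern it was given. That is exactly the problem: if the pattern in the data reflects historical imbalance, the output reflects it too.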