As soon as AIs started to be deployed in the “real world”, they started to be accused of being biased. Are AIs inherently racist or misogynistic? Do they always support the status quo? Recently, while exposing a strange background-detection “bug” in Zoom that erased his Black colleague’s head, Colin Madland discovered a similar issue in Twitter’s picture-cropping algorithm.
As always, the debate raged between people denouncing bias in the algorithm itself and those claiming the development team had reproduced their own biases. In this article, I want to help you understand how AIs end up reproducing such biases and what can be done about it.