“Fixing” AI bias
AUTHENTICITY // AI-BIAS
“We should use AI as an opportunity to reduce or even eliminate biases and discrimination from our societies.” Sounds great. But getting this right is not a matter of technology. Humans have a long history of discrimination, social exclusion and inequality, and you cannot fix that top-down with algorithms. Instead of building algorithms that reproduce inequality, we should work on the societal mechanisms that produced it in the first place. That work has to happen in the physical world, bottom-up as well as top-down, across all social strata.

Discrimination, in the sense of distinguishing between cases, is precisely the function of an algorithm; otherwise it would provide no insight. Just like humans, algorithms cannot see without a perspective. The idea that you can “debias” algorithms rests on the assumption that there exists a single correct description of reality, and that any deviation from it constitutes bias. But there is no single correct description of reality. We only have descriptions of our -desired- reality, and the question of what our -desired- reality is cannot be outsourced to computers. We have to answer it ourselves, dynamically, in interaction with each other.

Moreover, not everything can be properly translated into data, variables and scale scores. Measuring means simplifying and forgetting. We forget that inclusion is not only about gender, sexual orientation or cultural background, but also about the right to -escape- from your cultural, social or ‘biologically’ imposed category. With AI algorithms we do the opposite: we categorize people on the basis of quantifiable traits such as gender, cultural background, or the length of their smile, and grant them access to services on that basis, as the sketch below illustrates. True inclusion, by contrast, is about being able to see people beyond their social or cultural categories.
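To make that last point concrete, here is a minimal, entirely hypothetical sketch of such a scoring algorithm. The traits, weights and threshold are invented for illustration and do not come from any real system; the point is only that the function is useful precisely because it discriminates: it draws a line through a population based on quantified traits and gates access accordingly.

```python
# Hypothetical sketch of a trait-based eligibility score.
# All fields, weights and the threshold are invented for illustration.

from dataclasses import dataclass


@dataclass
class Applicant:
    # The person is reduced to a handful of quantifiable traits.
    age: int
    postcode_income_band: int  # 1 (low) to 5 (high); a proxy variable
    smile_length_mm: float     # an arbitrary measurable feature


def eligibility_score(a: Applicant) -> float:
    # The weights encode a perspective: someone chose which traits matter
    # and how much. There is no "neutral" choice of weights.
    return 0.2 * a.age + 1.5 * a.postcode_income_band + 0.1 * a.smile_length_mm


def grant_access(a: Applicant, threshold: float = 12.0) -> bool:
    # The whole point of the model is to separate people into two groups.
    # Remove the discrimination and the function returns the same answer
    # for everyone, i.e. it provides no insight.
    return eligibility_score(a) >= threshold


if __name__ == "__main__":
    alice = Applicant(age=30, postcode_income_band=4, smile_length_mm=28.0)
    bob = Applicant(age=30, postcode_income_band=1, smile_length_mm=28.0)
    # Identical people except for one proxy trait; the category decides.
    print(grant_access(alice))  # True  (score 14.8)
    print(grant_access(bob))    # False (score 10.3)
```

Note that nothing in this sketch is a bug to be “debiased”: the boundary it draws simply expresses whatever desired reality its designers wrote into the weights, which is exactly the decision the text argues cannot be outsourced to computers.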