“Fixing” AI bias
Do you want inclusive AI? Do you believe fixing bias is the solution? Then read this important paper by Abeba Birhane:
- “AI has become the hammer every messy social challenge is bashed with. The intrinsically political tasks of categorizing and predicting things like ‘acceptable’ behavior, pass as apolitical technical sorting tasks. As a result, harmful outcomes are treated as side effects that can be treated with tech-solutions.”
- “Underlying the idea of fixing bias is the assumption that there exists a single correct description of reality where a deviation from it results in bias. The ‘correct’ way often means the status quo.”
- “When we see bias, we see problems that surfaced from a field that has inherited unjust, racist, and white supremacist histories. Problems that have roots in the mathematization of ambiguous issues, historical inequalities, and asymmetrical power hierarchies.”
- “Even if one can suppose that bias in a dataset can be ‘fixed,’ what are we fixing? Is the supposedly ‘bias-free’ tool used to punish, surveil, and harm anyway?”
- “This calls for interrogating contextual and historical structures that might give rise to such patterns, instead of using the findings as input toward building predictive systems and repeating existing inequalities and historical oppression.”