Last week the public was introduced to Queen B: an AI system that collects personal data and steers employees with voice messages: “You look a bit tired, I suggest you grab a coffee”, “I would not formulate that sentence that way”. Queen B is part of the speculative workplace that Studio LONK designed to make us reflect on the desirability of a data-driven workplace. LONK asked me to kick off the accompanying debate. Will Queen B make us more effective at work? Watch the video and read the column here:
Queen B and her swarm of robotic humans
A workplace that knows you better than you know yourself. A workplace that makes you do your tasks more efficiently. Just hand over your data and Queen B will guide you towards a quantified heaven where all your inefficient behavior will be eliminated. The more data you share, the more personalized and effective Queen B’s work advice will become. Effective? Effective to do what? Effective to improve our workplace? Or effective to make our behavior more programmable for employers and companies?
Data-driven technologies are among the most misunderstood technologies of our time. Because these systems can detect patterns in our data that the human eye cannot see, we think they are magically smart. So we have all become members of the Big Data Church and started to transform our lives into data-driven machinery. We follow the high priests from Silicon Valley who tell us that their AI systems know us better than we know ourselves, and we plug in our fridges, Fitbits and workplaces. We blindly use their digital assistants, platforms and algorithms to determine what we want to watch, listen to and buy, and whom we want to date or hire for a job. We unquestioningly believe Elon Musk when he predicts that artificial intelligence will surpass all human intelligence: “Just outsource your thinking process to our AI system. Don’t trust your intuition, don’t follow your senses, and don’t follow your personal ideals and intentions. Instead, follow the ideals and intentions that we programmed into our AI system. Because after all, AI knows best!”
Dear participants of this evening: our sky-high expectations of AI are based on totally misguided conceptions of what AI is. Stop listening to the marketing mantras of Silicon Valley and its Big Data Church. Listen to the scientists who investigate and work with artificial intelligence and big data. Listen, for example, to mathematician Cathy O’Neil, who reminds us that there is nothing neutral about data. In her book ‘Weapons of Math Destruction’ she shows how social norms, stereotypes and simplifications are hardwired into data-driven technologies and spread at a larger scale. Or listen to mathematician Edward Frenkel, who compares our enthusiasm for data-driven technologies to an eleven-year-old child who has just learned geometry and is so excited about it that he thinks the whole world can be explained by circles and triangles. Or listen to AI experts like Gary Marcus, who explain that AI systems are not necessarily intelligent, since there is more to intelligence than pattern recognition and computation. Humans can learn things in one context and apply them in another; computers cannot. When it comes to intuitively understanding social situations, a two-year-old toddler already surpasses a computer. And when we look at contextualization, common sense and reasoning ability, humans outperform computers. We need to demystify AI and understand it as an advanced form of statistics, maybe a form of ‘statistics on steroids’. Yes, of course computers are better at playing chess and Go, which is great, but humans are not as straightforward as chess pieces, and our behavior in the workplace is more complex than a game of Go.
So instead of worrying about an artificial superintelligence that will surpass us, we should worry about the data-driven technologies, the AI systems, the Queen B’s that replace human decision-making without any understanding of our complex, ambiguous world. When we start to compress human behavior into simplified scripts and data models, we lose sight of human diversity. Many aspects of our behavior are not quantifiable, and Queen B cannot understand why particular patterns appear in our behavioral data. When you raise your voice at work, for example, the system cannot tell whether you are exceeding the decibel limit because you are unaware of disturbing your colleagues, or because there is an emergency.
We think a system like Queen B knows us better than anyone else, but the only thing Queen B does is nudge our choices so she can better predict them. Queen B is programmed to make us work more efficiently and more predictably. But inefficiency is not the enemy of effective work. The shortest route from A to B is not always the most beneficial one. Detours are inefficient, but they lead to the productive surprises and experimental behavior that are crucial to innovation. Humans are messy, ambiguous and creative beings who don’t want to behave rationally every minute of the day. How many imperfections will Queen B allow us? Will she let us take alternative routes and behave irrationally, or will she transform us into rational, predictable robotic humans? Do we really want to make computer programmers and systems like Queen B the engineers of our behavior? The decision is up to us.