An AI apocalypse?


Last week I gave my guest lecture “An AI apocalypse?” at the Institute for Interdisciplinary Studies, University of Amsterdam. Core message: the existential threat of AI is portrayed as a risk for the future, while current AI is already threatening us existentially. Not by suddenly eliminating all humans, but by gradually devaluing humans and reducing them to an economic function. With students from various countries and disciplines we discussed AI (beyond) doom, AI (beyond) utopia, capitalism, the anthropomorphization of AI and the machinization of humans. Sharp observation from one of the students: AI is a good populist, as it turns complex realities into seeming simplicity. Compliments to Lisa Doeland for developing and teaching the fantastic honours module Apocalypse.

Research paper


My research paper with Ciano Aydin is published in AI & Society. What seriously concerns me is how we allow AI to shape our idea of what it means to be human. In this paper, we demonstrate how AI is understood as a representation of authentic as well as inauthentic intelligence. Both perspectives are problematic because they indirectly define and essentialize what being human(like) means. This results in a “limited human” who is robbed of indeterminacy and locked into the ideological apparatus of AI. Full paper open access here.

An orc smiling into the camera


People increasingly understand themselves in the mirror of artificial intelligence. AI increasingly (mis)informs us about what the self is, as well as about the artefacts that surround it. Society’s preoccupation with AI becoming more ‘human-like’ makes us overlook an inverse, much more relevant question: in what ways are we becoming more machine-like? An example is how people try to figure out what a good AI prompt might be to create more images like an existing one. In other words: how do we use machine-friendly language? “An orc smiling into the camera”.

Dying bird


“They admired the feathers and forgot the dying bird, and it is the dying bird that I am concerned with.” Philip K. Dick – The Android and The Human, 1972. This resonates perfectly with our contemporary AI-driven society. When we finally realize that creating human-like intelligence in a digital artifact is impossible and foolish, we simply simplify and mechanize what it means to be human in order to fulfil the AI recreation mission anyway.

Demystify AI


When it comes to big data, artificial intelligence and digitisation, we often think that we’re talking about immaterial things. But the cloud isn’t an immaterial abstraction in which our data magically becomes one with the universe. Keeping the cloud operational and training AI systems requires servers that are already emitting more carbon than scientists’ worst-case scenarios predicted. Read my tip for a sustainable digital world.

“Fixing” AI bias


“We should use AI as an opportunity to reduce or even eliminate biases and discrimination from our societies.” Sounds great. However, the chance to get it right is not about technology. Humans have a long history of discrimination, social exclusion and inequality. You cannot fix this top-down with algorithms. We should work on the societal mechanisms that produced this inequality, instead of building algorithms that reproduce it. This has to happen in the physical world, bottom-up as well as top-down, across all social strata.

Discriminating, in the sense of distinguishing, is precisely what an algorithm does; otherwise it would provide no insight. Just like humans, algorithms cannot see without a perspective. The idea that you can “debias” algorithms rests on the assumption that there exists a single correct description of reality, and that any deviation from it constitutes bias. But there is no single correct description of reality. We only have descriptions of our -desired- reality, and the question of what our -desired- reality is cannot be outsourced to computers. We have to define it ourselves, dynamically, in interaction with each other.

Moreover, not everything can be properly translated into data and variables with scale scores. Measuring means simplifying and forgetting. We forget that inclusion is not only about gender, sexual orientation or cultural background, but also about the right to -escape- from your cultural, social or ‘biologically’ imposed category. With AI algorithms, we’re doing the opposite: we categorize people on the basis of quantifiable traits such as gender, cultural background or the length of their smile, and grant them access to services on that basis. True inclusion, by contrast, means being able to see people beyond their social or cultural categories.

Research paper


My research paper with Ciano Aydin about authenticity and technology is published in Philosophy & Technology. My fascination with authenticity started when I saw The Truman Show. Truman’s ongoing search for authenticity, resulting in a sail trip in which he literally bumps into the cardboard walls of his staged robotic perfection-world, made a deep impression. But what is authenticity? And what does it mean to be(come) “yourself”? These questions have always occupied my mind and eventually resulted in my PhD research about authenticity in technological contexts. Authenticity has now become an increasingly important theme due to ongoing digitization and datafication. In this paper we explain what authenticity is, how it is influenced by emerging technologies, and how to understand the moral value of authenticity.


Fake me hard


Last weekend I visited the FAKE ME HARD exhibition which explores the reality of technology through futuristic installations, performances and debates. What struck me: the Kurzweilian AI-perspective was everywhere. Even our artistic imagination seems unable to truly transcend the capitalist AI-ideology of hardcore dataism and a mechanistic interpretation of intelligence.

AI criticism shouldn’t mean that we have to unquestioningly accept Silicon Valley’s AI marketing discourse before we can criticize it. AI criticism should mean: envisioning diverse futures beyond the AI-takeover hype cycle as well as beyond the “human-centered AI” cliché. It should question the misleading PR story of “intelligence explosions” and AI systems that “understand” us or “know us better than we know ourselves”, rather than fuel these AI-industry-driven perspectives.

However, I was impressed by the exhibition. Its broad perspective on what “fakeness” means was particularly good, and there were some great works to discover. I was truly touched by Liam Young’s Planet City. The idea in short: the entire world population lives in one giant city occupying a fraction of the earth’s surface, freeing the rest of the planet for rewilding and the return of stolen lands.

For the first time I came across a speculative image of the future that didn’t make me feel disgusted, angry, depressed or alienated. Although hyper-density has unlikeable aspects, I like Planet City for its ambiguity, its sociological-reflective potential, and its warm colors and cultural imagination. Google Smart City (and other creators of deadly Brave New World tech-dystopias): eat your heart out.

To avoid confusion


“Nothing human is strange to me”. We can now remove the word “nothing”. How to get rid of human doubt, nuance, curiosity, social friction, context and ambiguity? How to get rid of our own personal journey of meaning and interpretation? Let Facebook categorize all our content: satire / no satire / joke / no joke / right / wrong / zero / one. Computer says yes, computer says no.

We are becoming robots. “To prevent confusion”. No: imagine… confusion. It could lead to a good conversation or to personal development. If this is what “taking responsibility” for your platform looks like, then: no thanks.

Building AI in the image of man


Great article about the dead end of current AI. Sadly, these insights lead the author to the conclusion that we have to build AI in the exact image of human intelligence. I believe this is a waste of time, energy, money and resources, for many reasons:

  • The false promise of ”human-like” AI or AGI only benefits a small tech-elite.
  • It is pointless and nihilistic to outsource being human to machines. It is a waste of time, money and resources we could better invest in a sustainable planet.
  • An intelligent collaboration with AI requires complementary traits, since there is no point in teamwork when all actors possess similar qualities.
  • “Human-like” AI demonstrates a lack of creativity and overdose of narcissism, self-centredness and anthropocentric self-interest.
  • The idea of AI as a representation of human-like intelligence obscures the idealization of a mechanistic view of man, by defining what it means to be human on the meagre basis of what we cannot duplicate in AI.
  • The idea of “human-like” AI obscures the dataistic, surveillance-capitalist ideology that shapes our self-understanding, behaviour and judgement of AI systems in such a way that it increases asymmetries of power.
  • Instead of AI we should be more inspired by nature’s intelligence. We don’t need more copies of ourselves; we need something less destructive and more innovative.

AI “emotion detection”


False and overblown AI claims: I found a nice one. “A Chinese AI emotion-recognition system can monitor facial features to track how they feel. Don’t even think about faking a smile. The system is able to analyze if the emotion displayed is genuine.”

  • These headlines keep popping up. However, scientific studies have shown that emotion-recognition systems are built on pseudoscience. You cannot infer how someone feels from a set of facial movements. Facial expressions are –not– reliably linked to inner emotional states and do –not– reliably correspond to emotions.
  • Although Business Insider mentions this and interviews a social scientist at the end of the article, their headline and abstract suggest something else. AI companies can make these false claims not only because of uncritical clickbait headlines, but also because the buyers of these systems are often not well versed in social science.
  • The AI hype outstrips the reality, yet the self-fulfilling-prophecy effects of these emotion-detection systems are already out there. For example, in online job-application systems that lead to self-censorship and to policing your own facial expressions in order to game the system.
  • If AI emotion-detection systems seem to work, it’s not because AI has become “better” at recognizing emotions; it’s because we have started to behave like simple machines, with exaggerated facial expressions that can easily be read by machines.