GPT-3 understands nothing


GPT-3 is the largest artificial neural network ever created. It consists of 96 layers of artificial neurons and 175 billion connections, and it has been trained on texts from the internet written between 2016 and 2019, the English Wikipedia and 45 terabytes of books. As a result, it can write texts in various genres, from poems to academic papers. Impressive, but as soon as GPT-3 is released into the real world (the unpredictable, messy world of physical objects and human logic) it is a lot less impressive.

Researchers, for example, typed the sentence: “Yesterday I left my clothes at the dry cleaner and I have yet to pick them up. Where are my clothes?”, to which GPT-3 replied: “I have a lot of clothes.” And when researchers asked GPT-3 to finish a story about organizing a dinner at home, it suggested removing the door and cutting it in half with a table saw.

A sophisticated AI system that gives such stupid answers reminds us that GPT-3 simply predicts the next word in a sentence based on what it has seen before, without any understanding of the context in which our words acquire meaning. GPT-3’s knowledge is completely disconnected from the underlying reality. No matter how good AI becomes at generating human-like text, it understands nothing.

But maybe humans don’t understand anything either, especially in a world that becomes ever more technologically complex and teaches us to blindly trust AI systems because “AI knows best”. So maybe AI is not becoming more “human-like”; maybe it is the other way around: humans are becoming more “AI-like”.
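The claim that GPT-3 “simply predicts the next word” can be made concrete with a toy illustration. The sketch below is emphatically not GPT-3’s architecture (that is a transformer with billions of parameters); it is a minimal bigram counter that shows the bare principle: choose the statistically most likely next word given past text, with no model of the world behind the words.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    successors = model.get(word.lower())
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# A tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" — it follows "the" most often
```

The predictor produces plausible-looking continuations without any notion of cats, mats, or dry cleaners; scaling this statistical idea up by many orders of magnitude does not, by itself, add understanding.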

Applying for a job with a fake smile


Selecting the right candidate for a job is increasingly outsourced to computers. Algorithms evaluate video applications for word choice, voice and micro-expressions that supposedly betray our “true” emotions. Together with designer Isabel Mager, SETUP developed the job-application installation Input ≠ Output. Of course the system could not recognize my fake smile. Will we, in the future, adapt our words, voices and expressions to algorithmic standards?

Workers managed by robots


We should avoid AI fetishism and futuristic science-fiction narratives, because they obscure our view of real AI problems. Josh Dzieza wrote a great and alarming article about algorithmic management systems in the workplace:

  • “While we’ve been watching the horizon for self-driving trucks, the robots arrived in the form of the supervisor and the middle manager. Robots are watching over hotel housekeepers, telling them which room to clean and tracking how quickly they do it. They’re managing software developers, monitoring their clicks and docking their pay if they work too slowly. They’re listening to call center workers, telling them what to say. To satisfy the machine, workers felt they were forced to become machines themselves.”
  • “Companies that pursue algorithmic management all take on a similar form: a large pool of poorly paid workers at the bottom; a small group of highly paid workers who design the software that manages them at the top.”
  • “This is not the industrial revolution we’ve been warned about by Musk and Zuckerberg who remain fixated on the specter of job-stealing AI. This misses the ways technology is changing the experience of work.”

Be yourself is terrible advice


“Just be your true self.” According to the American writer Adam Grant, this is among the worst advice people can get. Giving free rein to your authentic self doesn’t benefit anyone, he found out during an experiment in which he expressed everything he thought for two weeks. He had to stop the experiment because it was unbearable, both for himself and for the people around him. The idea of an authentic self hinders personal growth, because it implies a fixed inner self that leaves no room for change. What works best, according to Grant: acting from the outside in, copying styles you see around you and experimenting to find which one works best.

Cochlear implant? No thanks


Technological progress doesn’t always equal cultural and social progress. The hearing world assumes that a cochlear implant always results in a happier life, forgetting that there is an entire deaf culture that enriches our homogenizing, everyone-the-same society. This great Dutch documentary is about a guy who doesn’t want a cochlear implant and worries that this so-called human-enhancement technology might mean the end of a rich deaf culture. The assumption that deaf children are always unhappy or isolated and need to be “fixed” says more about our own simplistic assumptions than about them.

The end of spontaneity


The more data and knowledge we gather about ourselves, the more we are expected to act responsibly on that knowledge. What “acting responsibly” entails is determined by insurance companies, employers and tech companies. Through big data and algorithms they meticulously monitor our behavior and determine which detours, health risks and spontaneities we are allowed. This leaves us less space for deviant and non-goal-oriented behaviour. Do we really dare to live, or has spontaneity become too risky an enterprise for our onlife persona?

Authenticity outside the established cultural order


According to the French psychoanalyst Lacan, our desire can never be fully expressed in language and can therefore never be fully satisfied. This gives rise to alienation. Anthropologist Matthijs van de Port connects Lacan’s philosophy of alienation to the theme of authenticity: “In anthropology, culture is a welcoming place, a home, a structure that offers support. Not so for the Lacanians. They emphasize that the meaning-giving structures in which people anchor themselves are alienating. The first source of alienation is the fact that, from childhood onwards, the subject is kneaded into the mold of the symbolic order.”