Fake me hard


Last weekend I visited the FAKE ME HARD exhibition, which explores the reality of technology through futuristic installations, performances and debates. What struck me: the Kurzweilian AI perspective was everywhere. Even our artistic imagination seems unable to truly transcend the capitalist AI ideology of hardcore dataism and a mechanistic interpretation of intelligence.

AI criticism shouldn’t mean that we have to unquestioningly accept Silicon Valley’s AI marketing discourse before we can criticize it. AI criticism should mean: envisioning diverse futures beyond the AI-takeover hype cycle as well as beyond the “human-centered AI” cliché. It should question the misleading PR story of “intelligence explosions” and AI systems that “understand” us or “know us better than we know ourselves”, rather than fuel these industry-driven perspectives.

However, I was impressed by the exhibition. I especially appreciated its broad perspective on what “fakeness” means. And there were some great works to discover. I was truly touched by Liam Young’s Planet City. The idea in short: the entire world population lives in one giant city occupying a fraction of the earth’s surface, freeing the rest of the planet for rewilding and the return of stolen lands.

For the first time I came across a speculative image of the future that didn’t make me feel disgusted, angry, depressed or alienated. Although hyper-density has unlikable aspects, I like Planet City for its ambiguity, its sociological-reflective potential, and its warm colors and cultural imagination. Google Smart City (and other creators of deadly Brave New World tech-dystopias), eat your heart out.

To avoid confusion


“Nothing human is strange to me.” We can now remove the word “nothing”. How to get rid of human doubt, nuance, curiosity, social friction, context and ambiguity? How to get rid of our own personal journey for meaning and interpretation? Let Facebook categorize all our content: satire / no satire / joke / no joke / right / wrong / zero / one. Computer says yes, computer says no. We are becoming robots. “To prevent confusion”. No, imagine… confusion. It could lead to a good conversation or to personal development. If this is what “taking responsibility” for your platform looks like, then: no thanks.

Building AI in the image of man


Great article about the dead end of current AI. Sadly, these insights lead to the conclusion that we have to build AI in the exact image of human intelligence. I believe this is a waste of time, energy, money and resources, for many reasons:

  • The false promise of “human-like” AI or AGI only benefits a small tech elite.
  • It is pointless and nihilistic to outsource being human to machines. It is a waste of time, money and resources we could better invest in a sustainable planet.
  • An intelligent collaboration with AI requires complementary traits, since there is no point in teamwork when all actors possess similar qualities.
  • “Human-like” AI demonstrates a lack of creativity and overdose of narcissism, self-centredness and anthropocentric self-interest.
  • The idea of AI as a representation of human-like intelligence obscures the idealization of a mechanistic view of man, defining what it means to be human on the measly basis of what we cannot duplicate in AI.
  • The idea of “human-like” AI obscures the dataistic, surveillance-capitalist ideology that shapes our self-understanding, behaviour and judgement of AI systems in a way that increases asymmetries of power.
  • Instead of human-like AI, we should be more inspired by nature’s intelligence. We don’t need more copies of ourselves; we need something less destructive and more innovative.

AI “emotion detection”


Among the false and overblown AI claims out there, I found a nice one: “A Chinese AI emotion-recognition system can monitor facial features to track how they feel. Don’t even think about faking a smile. The system is able to analyze if the emotion displayed is genuine.”

  • These headlines keep popping up. However, scientific studies have shown that emotion-recognition systems are built on pseudoscience. You cannot infer how someone feels from a set of facial movements: facial expressions are not reliably linked to inner emotional states and do not reliably correspond to emotions.
  • Although Business Insider mentions this and interviews a social scientist at the end of the article, its headline and abstract suggest otherwise. AI companies can make these false claims not only because of uncritical clickbait headlines, but also because the buyers of these systems are often not well versed in social science.
  • The AI hype outstrips the reality, yet the self-fulfilling-prophecy effects of these emotion-detection systems are already out there, for example in online job-application systems that lead to self-censorship and to policing your own facial expressions to game the system.
  • If AI emotion-detection systems work, it’s not because AI has become “better” at recognizing emotions; it’s because we have started to behave like simple machines, with exaggerated facial expressions that can easily be read by machines.

GPT-3 understands nothing


GPT-3 is the largest artificial neural network ever created. It consists of 96 layers of artificial neurons and 175 billion parameters, and has been trained on a vast collection of internet text written between 2016 and 2019, the English Wikipedia and 45 terabytes of books. As a result, it can write texts in various genres, from poems to academic papers.

Impressive, but as soon as GPT-3 is released into the real world (the unpredictable, messy world of physical objects and human logic) it is a lot less impressive. Researchers typed, for example, the sentence: “Yesterday I left my clothes at the dry cleaner and I have yet to pick them up. Where are my clothes?” to which GPT-3 replied “I have a lot of clothes.” And when researchers asked GPT-3 to finish a story about organizing a dinner at home, it suggested removing the door and cutting it in half with a table saw.

A sophisticated AI system that gives such stupid answers reminds us that GPT-3 simply predicts the next word in a sentence based on what it has seen before, without any understanding of the context in which our words acquire meaning. GPT-3’s knowledge is completely disconnected from the underlying reality. No matter how good AI becomes at generating human-like text, it understands nothing. But maybe humans also don’t understand anything, especially in a world that becomes more technologically complex and teaches us to blindly trust AI systems because “AI knows best”. So AI is not becoming more “human-like”; maybe it is the other way around: humans are becoming more like AI.
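The mechanism described above, predicting the next word purely from what was seen in training text, can be sketched with a toy bigram counter. This is a deliberately simplified illustration (GPT-3 is a transformer trained on far more context, and the corpus below is made up), but the principle of frequency-driven continuation without any understanding is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which word in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training.
    There is no notion of meaning here, only statistics."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny invented corpus: "clothes" follows "my" more often than "keys" does.
corpus = "i left my clothes at the dry cleaner and then i picked up my clothes and my keys"
model = train_bigrams(corpus)
print(predict_next(model, "my"))  # "clothes", chosen by frequency, not by understanding
```

Scale this idea up by billions of parameters and you get fluent text, but the prediction is still statistical pattern-matching, which is why the dry-cleaner question can go so wrong.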

Applying for a job with a fake smile


Selecting the right candidate for a job is increasingly outsourced to computers. Algorithms evaluate video applications for word choice, voice and micro-expressions that would betray our “true” emotions. Together with designer Kiki Mager, SETUP developed the job-application installation Input ≠ Output. Of course, the system could not recognize my fake smile. In the future, will we adapt our words, voices and expressions to algorithmic standards?

Workers managed by robots


We should avoid AI fetishism and futuristic science-fiction narratives, because they obscure our view of real AI problems. Josh Dzieza wrote a great and alarming article about algorithmic management systems in the workplace:

  • “While we’ve been watching the horizon for self-driving trucks, the robots arrived in the form of the supervisor and the middle manager. Robots are watching over hotel housekeepers, telling them which room to clean and tracking how quickly they do it. They’re managing software developers, monitoring their clicks and docking their pay if they work too slowly. They’re listening to call-center workers, telling them what to say. To satisfy the machine, workers felt they were forced to become machines themselves.”
  • “Companies that pursue algorithmic management all take on a similar form: a large pool of poorly paid workers at the bottom; a small group of highly paid workers who design the software that manages them at the top.”
  • “This is not the industrial revolution we’ve been warned about by Musk and Zuckerberg who remain fixated on the specter of job-stealing AI. This misses the ways technology is changing the experience of work.”

“Be yourself” is terrible advice


“Just be your true self.” According to the American writer Adam Grant, this is some of the worst advice people can get. Giving free rein to your authentic self doesn’t benefit anyone, he found out during an experiment in which he expressed everything he thought for two weeks. He had to stop the experiment because it was unbearable for himself and for the people around him. The idea of an authentic self hinders personal growth, because it implies that such a fixed, finished self exists in the first place. What works best, according to Grant: acting from the outside in, copying styles you see around you and experimenting to find which one works best.

Cochlear implant? No thanks


Technological progress doesn’t necessarily equal cultural and social progress. The hearing world assumes that a cochlear implant will always result in a happier life. The fact that there is an entire deaf culture that enriches our homogenizing, everyone-the-same society is completely forgotten. This great Dutch documentary is about a guy who chooses not to get a cochlear implant and worries that this so-called human-enhancement technology might be the end of a rich deaf culture. The assumption that deaf children are always unhappy or isolated and need to be “fixed” says something about our own simplistic assumptions.