GPT-3 understands nothing
GPT-3 is the largest artificial neural network ever created. It consists of 96 layers of artificial neurons and 175 billion parameters, and was trained on a filtered crawl of web text from 2016–2019 (some 45 terabytes of compressed text before filtering), the English Wikipedia, and large corpora of books. As a result, it can write texts in various genres, from poems to academic papers.

Impressive, but as soon as GPT-3 is released into the real world (the unpredictable, messy world of physical objects and human logic) it is a lot less impressive. For example, researchers typed the sentence: “Yesterday I left my clothes at the dry cleaner and I have yet to pick them up. Where are my clothes?”, to which GPT-3 replied, “I have a lot of clothes.” And when researchers asked GPT-3 to finish a story about preparing for a dinner at home, it suggested removing the door and using a table saw to cut the door in half.

A sophisticated AI system that gives such absurd answers reminds us that GPT-3 simply predicts the next word in a sentence based on what it has seen before, without any understanding of the context in which our words acquire meaning. GPT-3’s knowledge is completely disconnected from the underlying reality. No matter how good AI becomes at generating human-like text, it understands nothing.

But maybe humans also don’t understand anything, especially in a world that is becoming more technologically complex and teaches us to blindly trust AI systems because “AI knows best”. So perhaps AI is not becoming more “human-like”; maybe it is the other way around: humans are becoming more like AI.
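The prediction mechanism described above can be sketched in miniature. The toy bigram model below (a deliberately crude illustration, not GPT-3’s actual 96-layer transformer; the corpus and function names are invented for this example) picks the statistically most frequent next word, with no model of the world the words refer to:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the statistically most frequent follower of `word`.

    Note there is no notion of whether the continuation makes
    sense in the real world -- only of what co-occurred before.
    """
    followers = counts.get(word.lower())
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

corpus = ("i left my clothes at the dry cleaner . "
          "i left my keys at the office . "
          "my clothes are at the dry cleaner .")
model = train_bigrams(corpus)
print(predict_next(model, "dry"))      # "cleaner" always followed "dry"
print(predict_next(model, "clothes"))  # "at" was the most frequent follower
```

A real language model replaces the counting table with billions of learned parameters, but the objective is the same: continue the text plausibly, which is exactly why fluent output can coexist with zero understanding.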