Research - 11.04.2023 - 16:00 

AI and language: GPT-3 shows human-like language patterns

GPT-3 uses human-like psychological mechanisms when generating language, reproducing the semantic activation that occurs in the human brain. These findings by Jan Digutsch (University of St.Gallen) and Michal Kosinski (Stanford University) were first published in the renowned science journal NATURE.

Artificial intelligence (AI) that expresses itself in language makes people sit up and take notice. So what happens when an AI searches for the right words? One key to answering this question lies in what’s called semantic activation, and a new study on GPT-3 sheds light on it. GPT-3, the basic architecture behind ChatGPT, continues to surprise users and researchers alike.

Semantic activation

Semantic activation occurs when the brain automatically thinks of related terms upon hearing or reading words. When asked “What do cows drink?”, for example, humans are more likely to respond with “milk” than “water” (even though only calves drink milk), showing that “cow” and “drink” activate the word “milk” in semantic memory. GPT-3 now shows semantic activation patterns similar to those of humans, with significantly higher activation for related word pairs (e.g. “lime–lemon”) than for unrelated pairs (e.g. “tourist–lemon”).
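The contrast between related and unrelated word pairs can be sketched in code. The following is a minimal, purely illustrative Python example and not the study’s actual method: the tiny word vectors are made up for demonstration, whereas the researchers measured activation in GPT-3 itself. It only shows the kind of comparison involved, namely that a similarity measure scores a related pair higher than an unrelated one.

```python
import math

# Hypothetical toy vectors standing in for word representations.
# Real systems derive such vectors from large text corpora.
vectors = {
    "lime":    [0.90, 0.80, 0.10],
    "lemon":   [0.85, 0.75, 0.15],
    "tourist": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 for similar directions, near 0.0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

related = cosine(vectors["lime"], vectors["lemon"])       # related pair
unrelated = cosine(vectors["tourist"], vectors["lemon"])  # unrelated pair

print(f"lime-lemon: {related:.2f}, tourist-lemon: {unrelated:.2f}")
# The related pair scores higher -- analogous to stronger semantic activation.
assert related > unrelated
```

The pattern reported in the study is of this shape: a measurably stronger response for semantically related pairs than for arbitrary ones.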

Although GPT-3 can display human-like language patterns, semantic activation in GPT-3 is driven more by similarity in word meaning (“lime–lemon”, “day–night”) than by the association of words that are frequently used together (e.g. “go away” or “search under”). It is becoming apparent that the more advanced language models become, the more pronounced the differences to humans become. It can therefore be assumed that the result patterns outlined here will be even stronger for GPT-4, the not yet publicly available successor to GPT-3.

Additional benefit

On the one hand, the findings show how studying language models such as GPT-3 could help improve our understanding of human language because language models are trained to imitate human behaviour (here: language). They could also be used as model participants in psycholinguistic studies. Such studies help to better understand how language is processed and represented in the human brain.
It is also helpful for such studies that language models, unlike humans, do not suffer from fatigue or lack of motivation: they can run thousands of tasks per minute. Conversely, research into language models could also advance our understanding of the mechanisms and processes in the human brain, because artificial neural structures are easier to study than biological ones.


It should be noted that the behaviour of language models sometimes differs significantly from that of humans, despite the superficial similarity at first glance. It is therefore difficult to use language models like GPT-3 to study human behaviour or mental representation, since the underlying AI mechanisms partly differ from human ones. However, the differences between artificial and human language production can provide valuable clues for the further development of these language models, bringing their language generation even closer to that of humans. What makes the study of AI and language so intriguing is the notion of convergent evolution – the phenomenon that unrelated species such as birds and bats independently evolved a common trait (here: wings) to overcome similar environmental challenges. It is possible that we are currently witnessing a similar development between humans and AI.

The article "Overlap in meaning is a stronger predictor of semantic activation in GPT-3 than in humans" is available for download on the website of the science journal NATURE. Further information is available from the Institute of Behavioral Science and Technology (IBT-HSG).

Image: Adobe Stock / peshkova
