Background - 30.05.2025 - 09:00
Artificial general intelligence (AGI) is a type of AI that would possess human-like cognitive abilities and the capacity to learn. AGI would simulate the general intelligence of the human brain, allowing it to perform any intellectual task a human can. We sat down with Philip Di Salvo to discuss the rise and development of AGI.
Dr. Di Salvo, when talking about AGI, I conjure up images of science fiction films. There has always been a lot of hype around the advancement of AI. How realistic is it that we will see AGI in the next ten years?
Hype is a core component of anything AI-related in this historical moment. With Generative AI demonstrating such powerful results, AGI seems to be the next step in the hype cycle within public discourse. Technological developments always need a new horizon to pursue, and with GenAI already showing remarkable capabilities, it’s natural to push the limits even further. If we look at the current news around AI, AGI is a recurring topic, so much so that the MIT Technology Review recently noted it has suddenly become “a dinner table topic.” The problem is that there is no consensus on what AGI actually is or what it will mean, making the debate somewhat muddled, and hype thrives in confusion.
I am not in a position to judge the technical progress of AI, and therefore cannot make predictions about its next steps. But the fact that forecasts vary widely depending on who is making them is a clear sign of how poorly defined the concept of AGI still is, as I noted in a recent piece for RSI. Google DeepMind claims we’ll see AGI by 2030; OpenAI’s Sam Altman says his company already possesses the necessary tools to achieve it, adding to the opacity. Other CEOs are more cautious, while scientists, though not dismissing the possibility of AGI, tend to be less optimistic about the timeline; some even doubt that generative AI is the right path toward building AGI.
Some consider AGI a purely theoretical concept, while others see it as inevitable. For those in the camp that sees AGI as inevitable, what evidence do they point to?
Recently, I was reading Machina Sapiens by Nello Cristianini of the University of Bath, a book that analyzes some of the most recent developments in AI and the surprising behaviors exhibited by Generative AI. I find his approach compelling: he begins by attempting to define what intelligence actually is before drawing conclusions about what machines can do and what they might do better than humans. This is precisely where the notion of AGI reveals its limitations. In a polarized and narrow debate, often framed as “optimists” versus “pessimists” and heavily influenced by corporate actors with strong incentives to shape the narrative, I believe a prudent, measured perspective like Cristianini’s is essential. Within this context, I would strongly argue against the idea of technological “inevitability.” No technology is inevitable; all technologies are the result of socio-technical systems, human decisions, and political-economic choices and contexts.
But are all the narratives accurate or even plausible?
The notion of “inevitable” technology is, in fact, central to many of the current debates about AI’s supposed “existential threats”, discussions which, in most cases, serve as distractions from the real, concrete, and already serious harms caused by AI in practice rather than in speculation. While these existential threats are not necessarily tied to AGI specifically, they often stem from even vaguer conceptualizations of AI. Such narratives tend to frame AI as something autonomous, unstoppable, and catastrophic.
Claims that AI will take over, eliminate humanity, cause mass extinction, or destroy democracy are far closer to science fiction than to present-day reality. These narratives are frequently amplified in public discourse for opportunistic reasons and ultimately serve to obscure pressing issues such as algorithmic bias, systemic discrimination, and the environmental costs of AI technologies, among others, which are closely connected with how AI is built today. Kate Crawford’s recent art installation “Calculating Empires” and Karen Hao’s book “Empire of AI” are excellent reality-check exercises. Saying this doesn’t mean denying the possibilities of AI, of course, but rather pointing out that the debate is often sidetracked.
What’s your view on the current media and public discourse around AI? Are we overhyping its capabilities or underestimating its risks?
As a media researcher, I believe that media and journalism play a significant role in shaping the debate around AI, often amplifying hype, both optimistic and pessimistic. Through the Human Error Project, led by Prof. Veronica Barassi at the University of St.Gallen MCM, alongside other research initiatives, we examined European media coverage of AI and its “errors”, a broad term referring to the various shortcomings of algorithmic systems, particularly their limitations in “reading” human behaviour and nature. Our findings, which align with other research in the field, show that media coverage still tends to favour sensationalism, especially when drawing comparisons between human and machine intelligence. This tendency often overshadows more urgent and nuanced discussions about the nature of AI and its societal implications, topics that remain largely underrepresented in mainstream reporting.
As a matter of principle, we need to move beyond the binary framing that positions individuals as either champions of AI progress or adversaries of technology. Such polarization does little to help the public make sense of what AI truly is, how it is being developed, and where it might lead, an outcome that, as we've discussed, remains far from clear. It’s also crucial to remember that AI is a massive commercial enterprise. The most powerful players in this space are engaged in a competitive race to dominate the market, behaviour we have already witnessed in the rise of social media and the digital economy, which led to concentration and quasi-monopolies. If similar dynamics take hold in AI, the consequences could be even more problematic given the immense capabilities of the technology. This is why I advocate for more critical reporting and less uncritical repetition of company statements and visionary, speculative narratives. I love science fiction, but it belongs in books and movies, not on the front page.
Philip Di Salvo is a senior researcher and lecturer at the University of St.Gallen. His main research topics are the relationship between information and hacking, internet surveillance, and artificial intelligence. As a journalist, he writes for various newspapers.
Image: Adobe Stock / Futureaz