Research - 10.12.2025 - 12:00
AI is no longer just a technological innovation: it is reshaping work cultures and everyday workflows. Leaders now have to find ways to integrate machine intelligence with human intelligence. Researcher Hans Rusinek identifies three emerging workplace complications that stem not from AI itself, but from how we choose to use it.
Companies are pouring billions into generative AI while still waiting for a measurable productivity dividend. In the meantime, many firms are discovering a quieter, more corrosive byproduct that I like to call “workslop”.
It’s the kind of AI-generated output that looks polished and sounds plausible but adds no substance to the work at hand: presentations without thought, reports without relevance, messages without meaning. And because someone then has to clean it up, workslop doesn’t just waste time; it strains the social fabric of organisations.
Workslop eats away at trust. A study from Stanford’s Social Media Lab shows that after encountering it, 42% of co-workers see the colleagues responsible as less reliable, and 37% even see them as less intelligent. From the creation of new ideas to the correction or editing of work, workslop is not just a sign of “poor work”; it becomes a trust problem. Wherever output looks substantial but isn’t, a quiet culture of scepticism starts to form.
In my research and consulting work with firms across Europe, I see how quickly AI-enabled “efficiency” turns into a culture of hidden scepticism: colleagues become gatekeepers rather than collaborators, trust erodes, and managerial communication begins to feel synthetic rather than human. When employees read workslop as a sign of unreliability, the risk stops being technological and becomes relational.
The core lesson is simple: AI doesn’t poison work—our unreflective use of it does. What needs governance isn’t the model, but our mindset.
AI promises unprecedented efficiency, but efficiency is not the same as productivity. As economist Carl Benedikt Frey of Oxford University reminds us, productivity is the ability to generate new ideas, not to execute old ones faster. Frey argues that a large language model trained in the 19th century would have confidently declared the Wright brothers’ flight impossible; one trained before Galileo would have dutifully repeated a geocentric universe. That is the paradox: AI excels at consensus, not at discovery. At least not on its own.
For half a century we’ve put ever-faster computers into pockets and offices, yet productivity growth has fallen from 2% in the 1990s to 0.8% in the past decade, Frey reminds us. Research productivity has declined too: more scientists, fewer breakthroughs. AI risks amplifying this pattern. It accelerates routines—emails, summaries, micro-tasks—yet the time saved often gets consumed by more of the same.
The real question is whether we use AI to amplify human intelligence, or simply to accelerate the knowledge of yesterday. True productivity lives in the new.
AI is rapidly getting better at the things machines have always been good at: speed, pattern-matching, optimisation. And that is precisely why we need to get better at the things humans are uniquely good at. The real risk today is not that AI will outthink us, but that we will continue treating ourselves like machines — rushing, multitasking, and flattening our own intelligence.
The data is striking. Knowledge workers are interrupted every four minutes and need around nine minutes to refocus. Studies that looked at the interruption cost not just of emails or messages but of full-blown online meetings put the time to refocus after a meeting interruption at more than 20 minutes. Productivity growth has been falling for decades, despite faster devices. Sixty percent of employees feel cognitively underused. In other words, the crisis in intelligence began long before AI arrived.
And this is where AI becomes an unexpected mirror. By taking over the mechanical parts of thinking, it exposes how much of our supposed “smartness” was machine-like all along. That opens a deeper question: what is human intelligence when the machine does the machine part?
Human intelligence is embodied, emotional, intuitive, relational. It begins not with calculation, but with what philosopher Byung-Chul Han of Berlin University of the Arts calls “the first image of thinking”: goosebumps — the moment something touches us so deeply that we must think about it.
AI cannot feel that (because it lacks a body). But we can.
If we use AI well, it could become the catalyst that pushes organisations to reclaim this broader, more alive human intelligence: curiosity, moral judgement, imagination, sense-making.
The irony is that AI may finally force us to remember how intelligent humans can be.
Hans Rusinek is a philosopher, economist, researcher and lecturer teaching “Meaningful Work” at HSG. His work focuses on the future of work and the effect of technology on our daily lives.