Research - 06.05.2026 - 10:30
The new “Voices of the Leaders of Tomorrow” report, or VOLOT 2026 for short, a collaboration of the Nuremberg Institute for Market Decisions and the St. Gallen Symposium, provides an answer that is as sober as it is uncomfortable. For the study, 585 young talents from around the world and 100 top managers from large companies were surveyed. The result: agreement on the potential of AI, but clear differences on the question of what it should be used for. “The real tension therefore lies not in the ‘if’, but in the ‘how’. The respective visions of the future held by leaders will help determine how organisations will change as a result of AI,” says Blagoy Blagoev, full professor of Organisation at the University of St.Gallen (HSG). The expert on New Work regularly collaborates with industry partners on topics such as leadership skills for the digital world, organisational structure and design, change and transformation management, and organisational agility.
Both groups surveyed expect AI to fundamentally change the world of work: a clear majority anticipate far-reaching impacts. At the same time, the survey reveals a surprisingly broad consensus on what “good” AI should achieve: more time for meaningful work, better decisions, faster learning. Within this consensus, however, the emphases differ. Whilst experienced managers focus more on decision-making quality and customer benefit, young talents place greater emphasis on learning ability and creativity. For them, AI is not an accelerator of existing processes, but a lever for better work.
This perspective is also reflected in strategic preferences: a clear majority of young managers favour an “augmentation-first” approach. AI should empower people, not replace them. Pure automation remains a marginal phenomenon in both groups. For HSG expert Antoinette Weibel, this is no coincidence: “It makes an enormous difference whether companies use AI to improve work – or instrumentally, to replace or monitor people.” The intention behind its use becomes the decisive controlling factor. Or, as the study suggests: AI should follow human values and be guided by a “Human North Star”. Good intentions alone are not enough, emphasises Antoinette Weibel. What is crucial, says the HSG professor of human resource management, is what she calls “prefigurative structuring”: the conscious shaping of conditions before AI is even deployed. “Organisations that use AI successfully do not simply react – they structure in advance: governance, participation and clear boundaries for the use of AI.”
The study also shows that support for AI is fragile. While both groups see great potential in analysis, knowledge work and the reduction of routine tasks, support wanes as soon as key resources come under pressure. For instance, more than half of young talents cite the loss of skills as the greatest risk. For top managers, data protection is the top priority. In both groups, the same applies: as soon as autonomy diminishes or decisions become opaque, acceptance of AI drops significantly. “Managers often underestimate the human side of technology. The perceived loss of autonomy can quickly lead to resistance and blockages that are difficult to overcome,” says Blagoev.
This is particularly evident when it comes to trust in collaboration, a topic that also occupies leadership expert Antoinette Weibel in her research.
Only a small minority of young managers today place their full trust in companies when it comes to the use of AI. At the same time, a large majority demand clear lines of responsibility, including the ability to override decisions at any time. This is also evident when it comes to trust: “As soon as employees feel overwhelmed, trust collapses,” notes Weibel.
Many companies rely on the “human-in-the-loop” principle. For Weibel, this is an inadequate approach: “What matters is not that a human is involved in a task, but that they fulfil their duty to make good decisions – and to override the machine when it matters.” This requires something that is still lacking in many organisations: leaders who can withstand tension. Leaders who recognise when data is not enough. And who do not delegate responsibility to systems. “Otherwise, the machine becomes a shield – and responsibility disappears, just like the eroding trust in human capabilities.”
From the respondents’ perspective, what builds trust is surprisingly concrete: transparency about where AI is being used. Clear responsibilities. And above all: genuine human control. More than 80 per cent of young leaders demand that a responsible person be able to override AI decisions at any time. Particularly in sensitive areas such as HR decisions or healthcare, the final decision should remain with humans. At the same time, almost half advocate for new forms of governance, not just by companies, but in collaboration with the state and society.
It is surprising where respondents see the greatest shortcomings of AI in everyday working life and, in particular, in team management: not in the tools themselves, but in their application. Only around a quarter of the young talents who took part in the survey currently consider organisations to be sufficiently equipped to manage AI-supported decisions responsibly. The biggest bottleneck: the ability to even recognise errors, biases or unintended consequences.
The most important demand is therefore clear: training. For almost two-thirds of junior managers, training is the key to successful AI deployment, ahead of infrastructure or strategy documents. “Any company can have access to AI today,” says Blagoev. “But few know how to use the technology wisely and in a way that generates value.”
The generational difference is most evident when it comes to the question of what should be done with the productivity gains. The answer is clear: reinvest. Around 87 per cent of young talent and over 80 per cent of top managers are in favour of investing the gains in employees, for example through further training, better jobs or new career paths. The classic reflex to cut costs finds little support. Specifically, young managers prioritise above all the redesign of work: more autonomy, more learning, greater use of human strengths. Staff cuts rank far behind at the bottom of the list of options. “Acceptance is not created by promises, but by a credible distribution of the benefits of new technologies. Anyone who translates efficiency gains exclusively into staff cuts – as current developments in the insurance industry suggest – squanders the trust they urgently need for the next, deeper wave of AI transformation,” says Blagoev.
The VOLOT Report and the analysis by Blagoev and Weibel make it clear: AI is not measured by its computing power, but by the decisions companies make regarding its use. It remains a leadership issue. Antoinette Weibel points out: “AI often replaces thinking rather than improving it. This tendency towards so-called ‘AI sloppiness’ is likely to intensify. Two human abilities are at stake here: firstly, the ability to develop one’s own thought processes. AI takes this process off our hands – and thereby weakens it in our brains. Secondly: practical wisdom. That is, the ability to make good decisions in complex, contradictory situations. This judgement arises from experience and cannot be automated.”
Conclusion: The next generation is watching closely, and it does not want to see AI used thoughtlessly or in a way that undermines trust.
To the “Voices of the Leaders of Tomorrow” report (VOLOT 2026)
