
Research - 03.03.2025 - 10:30 

Automation in administration: a case of transparency or a leap in efficiency?

The automation of administrative decisions by software is the subject of heated debate in Switzerland. While some experts hope that it will increase efficiency and reduce error rates, others warn of a lack of transparency and algorithmic arbitrariness. A new book by researchers from the universities of St.Gallen and Lausanne highlights the challenges and opportunities of the technology.
Source: SCS-HSG

In cities such as Bern, Basel and Zurich, administrative staff use the software Citysoftnet for digital case processing in social welfare and child protection. However, the use of the software is not without controversy. Media reports, including articles in the NZZ, have recently highlighted technical deficiencies and delays in the payment of social benefits. Citizens often do not understand why applications are rejected or delayed – a problem that is becoming a matter of trust in an increasingly digitalised administration. 

In this context, “computational law”, i.e. the coding of legal norms in software, has long been more than just an experiment. From India to Denmark and the Netherlands, and from Michigan in the US to New Zealand, administrative processes are being made more efficient and more objective by having software automatically check, for example, whether someone is entitled to social assistance. The basis for this is the translation of a legal provision into software code. However, if citizens cannot understand how such decisions are made, public acceptance is at risk.
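The idea of translating a legal provision into software code can be illustrated with a minimal sketch. The thresholds, field names and criteria below are invented for demonstration purposes and do not reflect any real statute or any system mentioned in this article:

```python
# Illustrative sketch only: a hypothetical social-assistance eligibility
# rule encoded as software. All thresholds and criteria are invented.

from dataclasses import dataclass


@dataclass
class Applicant:
    monthly_income_chf: float
    liquid_assets_chf: float
    resident: bool


def is_eligible(a: Applicant) -> bool:
    """A rigid, rule-based check: every condition must hold."""
    return (
        a.resident
        and a.monthly_income_chf < 2500
        and a.liquid_assets_chf < 4000
    )


print(is_eligible(Applicant(2000, 1000, True)))   # True
print(is_eligible(Applicant(2600, 1000, True)))   # False
```

The appeal of such code is that it applies the same rule to every case; the risk discussed in this article is that an applicant who is rejected sees only the outcome, not the conditions encoded inside the function.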

A look at the USA and Elon Musk's vision: AI instead of administration?

The discussion about “computational law” is gaining additional momentum from a radical vision of Tesla and SpaceX boss Elon Musk. According to media reports such as in the Tagesschau and the Tages-Anzeiger, he announced that he would replace US administrative employees with artificial intelligence (AI). AI-based automation could also lead to significant staff cuts in public administration – a scenario that not only calls into question jobs, but also state control mechanisms. Musk argues that AI is cheaper, faster and less prone to error than human bureaucrats. Critics, on the other hand, warn that a complete lack of human judgement could lead to inhumane decisions: algorithms evaluate data, but they do not understand individual fates. 

How responsible does administrative software need to be? 

Researchers at the School of Computer Science at the University of St.Gallen (HSG) and the University of Lausanne (UNIL) have been investigating the use of AI in public administration – in particular in the area of so-called “Legal AI”. They identify various technical, legal, political and educational challenges and analyse them in detail:

Technical challenges:
In recent decades, there has been significant progress in implementing legal norms as software. Nevertheless, central problems remain. Laws often allow multiple legitimate interpretations, each of which may be valid depending on the argumentation; mapping this ambiguity in software remains a challenge. In addition, such systems often lack the flexibility needed to adequately accommodate changes in laws and in their interpretation.
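The ambiguity problem can be made concrete with a small sketch. Suppose a hypothetical provision states that "support may be granted to persons in need" – the readings and thresholds below are invented for illustration:

```python
# Sketch of the ambiguity problem: one (hypothetical) provision,
# "support may be granted to persons in need", admits several readings.
# Thresholds and names are invented for illustration.

def strict_reading(income: float) -> bool:
    # Reading 1: "in need" means income below a hard threshold.
    return income < 2000


def lenient_reading(income: float) -> bool:
    # Reading 2: "in need" covers anyone below a higher threshold.
    return income < 3000


interpretations = {"strict": strict_reading, "lenient": lenient_reading}


def evaluate(income: float) -> dict:
    # A human decision-maker can weigh all readings against the facts
    # of a case; software that automates the decision must commit to one.
    return {name: rule(income) for name, rule in interpretations.items()}


print(evaluate(2500))  # {'strict': False, 'lenient': True}
```

The point of the sketch is that the two readings disagree for a whole range of cases, and the choice between them is a legal judgement, not a technical one.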

Legal challenges:
To ensure that the decisions of such systems remain comprehensible, it would be essential to disclose the underlying processes – in particular, to be able to review them and, if necessary, challenge them. However, it is precisely this transparency that is often lacking. This not only makes it more difficult to protect against systematic distortions (e.g. against particularly vulnerable groups), but also increases the risk of public scandals. Adherence to certain ethical standards could counteract this.

Political challenges:
The use of “Legal AI” in public administration raises many political and democratic issues. For example, numerous laws are difficult for laypeople to understand – in tension with the legal maxim “ignorance of the law is no excuse”. However, AI could also be used to make the law more accessible. This raises the question of whether the state should invest in applications that facilitate access to the law, rather than focusing primarily on efficiency gains. Another topic concerns the adaptation of laws to digital administrative processes: should the state formulate laws during the legislative process in such a way that they can be implemented in software more easily (“digitally-ready legislation”) – for example, by avoiding ambiguity – and is this desirable and feasible?

Educational policy challenges:
As Legal AI has an increasing influence on our daily lives, it is important to ensure the democratic participation of citizens. This includes integrating basic knowledge of computational law into school curricula – at both primary and secondary school level, as well as in higher education. 

Efficiency or lack of transparency?

As it turns out, the use of AI in public administration is not purely a question of efficiency. Transparency, ethical guidelines and clear responsibilities are essential to protect civil rights. At the same time, algorithms – if used responsibly – could help to reduce administrative costs and make processes fairer. 

The question remains: is the propagated focus on automating legal and administrative processes with AI the only right way? Or does it open the floodgates to a cold, incomprehensible, distant and opaque bureaucracy? The coming years will show whether transparency measures and legal frameworks can keep pace with technological change – or whether Musk's vision of an AI-driven administration will ultimately become reality after all. The researchers have made their current findings on these topics – in particular on the four challenges outlined above, from technology to education policy – available to a wider audience in a new book entitled “AI and Law: How Automation is Changing the Law”. In the book, they not only discuss these challenges using concrete examples of “Legal AI” in use worldwide, but also identify pressing open issues and provide specific guidelines and suggestions for the responsible use of AI in administration and for preparing future generations for “computational law”.


The authors of the book “AI and Law: How Automation is Changing the Law” are Dr. Clement Guitton, a postdoc at the School of Computer Science (SCS-HSG) at the University of St.Gallen, Prof. Dr. Aurelia Tamò-Larrieux, Associate Professor of Private Law at the University of Lausanne (UNIL), and Prof. Dr. Simon Mayer, Full Professor of Interaction- and Communication-Based Systems at the School of Computer Science (SCS-HSG).


Image: Adobe Stock / Andrey Popov
