
Research - 11.11.2025 - 09:00 

SNSF and Palatin Foundation fund projects on artificial intelligence, child protection and platform control

Three new research projects at the University of St.Gallen (HSG) are investigating the risks of large online platforms, the protection of children on the internet, and the development of argumentative artificial intelligence that can handle multiple perspectives. They are funded by the Swiss National Science Foundation (SNSF) and the Palatin Foundation with a total of around CHF 2 million.
Source: SCS-HSG

According to the latest Cyber Concerns Monitor 2025, 78% of the Swiss population considers cybercrime to be a serious problem. Digital security is thus given the same weight as other everyday concerns, such as rising health insurance premiums or retirement provision. There is particular concern about attacks on critical infrastructure, digital fraud and disinformation. Child protection is a key focus: 80% of respondents are in favour of a social media ban for under-16s, and many parents feel overwhelmed by digital threats. SRF also reported that a majority would like to see stricter rules on the use of social media – especially to protect children and young people.

Three new research projects at the Institute of Computer Science (ICS-HSG) at the University of St.Gallen are addressing these developments. They aim to help make the digital space safer, more open and more socially responsible. 
 

Project CoCoDa: concentration of control and data as a systemic platform risk

Dominant online platforms such as Meta's Instagram or Bytedance's TikTok have enormous market power – and thus a profound influence on freedom of expression and democracy. The project “Addressing the Root Cause of Systemic Risk Posed by Dominant Online Platforms: Integrating Computer Science and Legal Scholarship to Measure and Mitigate Concentration of Control and Data (CoCoDa)” examines these structural risks together with experts from the fields of law and computer science.  

Computer science and law have so far mostly responded in isolation to the challenges of platform dominance. The CoCoDa project combines both disciplines to develop technically feasible solutions that comply with data protection regulations. The task is challenging: legal transparency requirements clash with systems whose algorithms change weekly, whose data structures are confusing or encrypted, and whose interfaces allow only limited insight. Researchers must reconcile legal requirements such as data protection and data minimisation with technical realities such as dynamic algorithms and a lack of standards.  

This is crucial for society: only with such tools can journalists, authorities and users understand how platforms control information – and whether children, minorities or democratic debates are adequately protected. The project is funded by the Swiss National Science Foundation (SNSF) with CHF 1,118,716 over five years starting in June 2025. 

Objectives and societal benefits: 

  • Interdisciplinary analysis of existing legal and technical approaches to platform data access
  • Development of integrated “techno-legal” methods based on the EU Digital Services Act, which is currently also being discussed in Swiss politics, especially since the publication of the preliminary draft of the law on communication platforms and search engines at the end of October 2025
  • Practical testing in the areas of social media and app stores 
  • Establishment of an open research platform to strengthen civil society, regulation and science 
  • Reports, articles and other contributions on fact-based, democratically legitimised platform regulation

Researchers from various universities are involved in the project: 

  • Prof. Dr. Simon Mayer and Luka Bekavac, University of St.Gallen 
  • Assoc. Prof. Dr. Aurelia Tamò-Larrieux and Alice Palmieri, University of Lausanne 
  • Prof. Dr. Gijs van Dijck, Asst. Prof. Dr. Konrad Kollnig and Henry Tari, University of Maastricht 
  • Prof. Dr. Elena Simperl, Jake Stein and Sophia Worth, Open Data Institute London 

Research results that sound like music – initial findings as a playlist: 

The initial findings from “CoCoDa” are available in an unusual form. With the help of artificial intelligence, the research team has turned key findings into a music playlist of six songs – creative, critical and publicly accessible at the same time: 

  1. The song “Code of Control” formulates the core concern of the project: law and computer science must work together to find ways to make the power of large platforms more transparent and controllable. 
  2. “Soap the Bubble” shows how the diversity of topics in personalised recommendations on social networks narrows rapidly. The analysis tool SOAP (System for Observing and Analysing Posts), developed at the University of St.Gallen, reveals how quickly information spaces become one-sided – and how transparency can help break down filter bubbles (an illustrative sketch of such a diversity measurement follows below). 
  3. “Flag & Safe” addresses the protection of children from harmful content; inspired by the project of the same name with the Halden primary school in St.Gallen. 
  4. “Filters on Filters” describes how researchers have only limited access to platform data despite the EU Digital Services Act. 
  5. “The Word That Isn’t There” highlights how platforms downplay or linguistically obscure risks in official reports. 
  6. “Terms of Fairness” concludes with a call for digital justice: for fair data flows, open systems and a design that gives control back to users – based on feedback from the CoCoDa team on the new EU Digital Fairness Act.
“With this playlist, we show that science can also be heard – as an invitation to think about complex topics in a different way.”
Prof. Dr. Simon Mayer, Institute of Computer Science (ICS-HSG)
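
To make the idea of measurable narrowing more concrete, here is a small, purely illustrative Python sketch. It is not the SOAP tool itself: the topic labels, the two feed snapshots and the use of Shannon entropy as a diversity measure are assumptions chosen for illustration.

import math
from collections import Counter

def topic_entropy(topics):
    # Shannon entropy (in bits) of the topic labels seen in one feed snapshot.
    counts = Counter(topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical snapshots of recommended posts, labelled by topic,
# taken at the start of a session and after prolonged scrolling.
snapshot_early = ["politics", "sports", "music", "science", "politics", "travel"]
snapshot_late = ["politics", "politics", "politics", "sports", "politics", "politics"]

print(f"early diversity: {topic_entropy(snapshot_early):.2f} bits")
print(f"late diversity:  {topic_entropy(snapshot_late):.2f} bits")

A falling entropy value between snapshots would indicate that the recommendations are converging on fewer and fewer topics – the kind of narrowing the project makes visible.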

Flag&Safe project: better protection for children using digital platforms 

More and more parents and experts are concerned that children and young people are coming into contact with sexualised, violent or radicalising content on social networks. According to an SRF report, a clear majority would like to see much stricter rules for platforms and more protection for minors. Political pressure is also growing: in autumn 2025, the European Commission initiated several proceedings against large platforms for allegedly doing too little to combat dangerous content (tagesschau.de). At the same time, voices in Switzerland are calling for effective age checks to finally be introduced on the internet – a point that the NZZ recently described as “long overdue”. While awareness campaigns help, there is still no protection mechanism that provides effective feedback to the children affected. 

The “Flag&Safe” project aims to change this with a “trusted flagger platform” tailored specifically to children and young people. Funded by the Palatin Foundation with CHF 72,935, the project started in June 2025 and will run until May 2026. 

Objectives and social benefits: 

  • Development of a child-friendly platform through which problematic content can be easily reported – with the active participation of children in the design process 
  • Automatic review of reports – with forwarding of particularly urgent cases (an illustrative sketch follows after this list) 
  • Integration of a network of experts in Switzerland, for example from school psychology or media education, to provide local support to affected children 
  • Establishment of a rapid response service for children in acute crisis situations 
  • Workshops at Swiss schools on the topics of social media, algorithmic recommendations and online filter bubbles 
  • Europe-wide support for the implementation of new EU regulations (EU Digital Services Act) on child protection online  
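
As an illustration of what an automatic review with escalation of urgent cases could look like in principle, the following minimal Python sketch routes incoming reports either to a rapid-response queue or to a regular expert-review queue. The category names and the routing rule are hypothetical assumptions, not the Flag&Safe project's actual design.

from dataclasses import dataclass

# Hypothetical categories treated as acute and escalated immediately.
URGENT_CATEGORIES = {"self-harm", "sexual content", "threats of violence"}

@dataclass
class Report:
    category: str      # label chosen by the reporting child
    description: str   # free-text description shown to the experts

def triage(report):
    # Acute crises are forwarded to the rapid-response service;
    # everything else goes to the regular expert-review queue
    # (e.g. school psychology or media education).
    if report.category in URGENT_CATEGORIES:
        return "rapid-response"
    return "expert-review"

example = Report(category="threats of violence",
                 description="Someone keeps sending scary messages.")
print(triage(example))  # -> "rapid-response"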

The following are involved in the project:  

 

Project “M-Rational”: How AI can learn to argue from multiple perspectives 

Social debates are not only about facts, but also about values, opinions and perspectives. Artificial intelligence has so far struggled with this complexity. The project “Reconceiving and Improving Multi-Perspectivality and Rationality in Argumentative AI (M-Rational)” is developing new concepts so that AI can not only draw logical conclusions, but also represent different points of view in a comprehensible way. The SNSF is supporting the project with CHF 886,297 for three years from September 2025. 

Objectives and societal benefits: 

  • Development of AI systems that recognise and understand different opinions and perspectives 
  • Combining technical development with approaches from the philosophy of language and the philosophy of science 
  • Developing methods to test how good AI is at dealing with different arguments 
  • Promoting AI that, instead of thinking in terms of true or false, can weigh up different perspectives and present them in a differentiated and reasoned manner (a minimal illustration follows after this list) 
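
As a rough illustration of what representing several points of view “in a comprehensible way” can mean at the data level, the following Python sketch models an issue with multiple perspectives and their supporting arguments. The structure, field names and example content are assumptions made for illustration, not the M-Rational project's actual formalism.

from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str
    reasons: list[str]

@dataclass
class Perspective:
    stance: str                                        # e.g. "child protection", "privacy"
    arguments: list[Argument] = field(default_factory=list)

@dataclass
class Issue:
    question: str
    perspectives: list[Perspective] = field(default_factory=list)

    def summarise(self):
        # Present each perspective side by side instead of a single true/false verdict.
        lines = [self.question]
        for p in self.perspectives:
            for a in p.arguments:
                lines.append(f"- [{p.stance}] {a.claim} (because: {'; '.join(a.reasons)})")
        return "\n".join(lines)

issue = Issue(
    question="Should social media platforms verify users' ages?",
    perspectives=[
        Perspective("child protection", [
            Argument("Age checks reduce minors' exposure to harmful content",
                     ["harmful content is widespread", "minors are especially vulnerable"]),
        ]),
        Perspective("privacy", [
            Argument("Mandatory age checks risk excessive data collection",
                     ["verification requires identity data", "data minimisation is a legal principle"]),
        ]),
    ],
)
print(issue.summarise())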

The following researchers are involved in the project: 

  • Prof. Dr. Christina Niklaus, University of St.Gallen 
  • Dr. Reto Gubelmann, Digital Society Initiative, University of Zurich 
  • Dr. André Freitas, Idiap Research Institute, Martigny