Research - 11.11.2025 - 09:00
According to the latest Cyber Concerns Monitor 2025, 78% of the Swiss population considers cybercrime to be a serious problem. This puts digital security on a par with other everyday concerns such as rising health insurance premiums or retirement provision. There is particular concern about attacks on critical infrastructure, digital fraud and disinformation. Child protection is a key focus: 80% of respondents are in favour of a social media ban for under-16s, and many parents feel overwhelmed by digital threats. SRF also reported that the majority would like to see stricter rules on the use of social media – especially to protect children and young people.
Three new research projects at the Institute of Computer Science (ICS-HSG) at the University of St.Gallen are addressing these developments. They aim to help make the digital space safer, more open and more socially responsible.
Dominant online platforms such as Meta's Instagram or ByteDance's TikTok have enormous market power – and thus a profound influence on freedom of expression and democracy. The project “Addressing the Root Cause of Systemic Risk Posed by Dominant Online Platforms: Integrating Computer Science and Legal Scholarship to Measure and Mitigate Concentration of Control and Data (CoCoDa)” examines these structural risks together with experts from the fields of law and computer science.
Computer science and law have so far mostly responded in isolation to the challenges of platform dominance. The CoCoDa project combines both disciplines to develop technically feasible solutions that comply with data protection regulations. The task is challenging: legal transparency requirements clash with systems whose algorithms change weekly, whose data structures are opaque or encrypted, and whose interfaces allow only limited insight. Researchers must reconcile legal requirements such as data protection and data minimisation with technical realities such as dynamic algorithms and a lack of standards.
This is crucial for society: only with such tools can journalists, authorities and users understand how platforms control information – and whether children, minorities or democratic debates are adequately protected. The project is funded by the Swiss National Science Foundation (SNSF) with CHF 1,118,716 over five years starting in June 2025.
Objectives and societal benefits:
Researchers from various universities are involved in the project:
Research results that sound like music – initial findings as a playlist:
The initial findings from “CoCoDa” are being presented in an unconventional form. With the help of artificial intelligence, the research team has turned key findings into a music playlist of six songs – creative, critical and publicly accessible at the same time:
More and more parents and experts are concerned that children and young people are coming into contact with sexualised, violent or radicalising content on social networks. According to an SRF report, a clear majority would like to see much stricter rules for platforms and more protection for minors. Political pressure is also growing. In autumn 2025, the European Commission initiated several proceedings against large platforms for allegedly doing too little to combat dangerous content (tagesschau.de). At the same time, voices in Switzerland are calling for effective age checks to finally be introduced on the internet – a point that the NZZ recently described as “long overdue”. While awareness campaigns help, there is still no protection mechanism in place that provides effective feedback to the children affected.
The “Flag&Safe” project aims to change this with a “trusted flagger platform” tailored specifically to children and young people. Funded by the Palatin Foundation with CHF 72,935, the project started in June 2025 and will run until May 2026.
Objectives and social benefits:
The following are involved in the project:
Social debates are not only about facts, but also about values, opinions and perspectives. Artificial intelligence has so far struggled with this complexity. The project “Reconceiving and Improving Multi-Perspectivality and Rationality in Argumentative AI (M-Rational)” is developing new concepts so that AI can not only draw logical conclusions, but also represent different points of view in a comprehensible way. The SNSF is supporting the project with CHF 886,297 for three years from September 2025.
Objectives and societal benefits:
The following researchers are involved in the project:
