Research - 26.06.2024 - 14:00 

When Machines Fail Us

Artificial intelligence is making inroads into many areas of our lives and is increasingly expected to take decisions and tasks off our hands. It is often forgotten, however, that the technology is also prone to unintended biases and wrong decisions. Last Monday, experts discussed this problem at the public conference "Machines That Fail Us".
How can we counter the harmful effects of AI on our society? This question was discussed by Dr Philip Di Salvo, Lorna Goulden, Ilia Siatitsa and Luca Zorloni (from left to right).

The case of the black student Robin Pocornie vividly illustrates how artificial intelligence can be racist. During the coronavirus pandemic, Pocornie wanted to take an online exam at the VU University of Amsterdam. At the time, the university was using software that had access to the students' webcams in order to automatically detect attempts at cheating. However, because the software had primarily been trained on white faces, it did not recognize Pocornie: "No Face found" simply appeared on her screen. Only by shining a lamp directly into her face throughout the exam was she finally able to satisfy the AI monitoring system.

This is just one of many examples discussed at the "Machines That Fail Us" conference at SQUARE, which highlighted the dangers and injustices that can emanate from AI applications. "Our technologies are designed in such a way that they can also be wrong. We cannot rely on them. One of the exciting questions is how tech developers and tech companies deal with the fallibility of AI and algorithms, and what the use of AI does to our own human judgment," said Prof. Dr. Veronica Barassi, who organized the conference together with her team from the HSG's Institute of Media and Communication Management (MCM-HSG). In 2020, Barassi and her team also launched "The Human Error Project", a research project dedicated to precisely these questions. During the conference, team members presented publications from the project, which focus on civil society's fight against algorithmic injustice in Europe and on mapping the media discourse around AI errors and human profiling. The discussion also touched on the imbalance of power between tech companies and civil society actors and on the fact that AI often perpetuates existing inequalities.

People cannot be standardized

The keynote speech at the conference, which was funded by the Swiss National Science Foundation as part of its Agora funding scheme, was given by Stefania Milan, Professor of Critical Data Studies at the University of Amsterdam. Using vivid examples, she expressed concerns about the increasing use of surveillance software. "I see this software as a regulating data infrastructure. It takes over functions that were once performed by humans within the state apparatus. With the pandemic, we have seen an acceleration in the use of such infrastructures, often in an undemocratic and non-transparent context." Other concerns she raised included the still lagging regulation of AI, the performance of government tasks by for-profit contractors, the difficulty for individuals of opting out of AI applications, and rising energy consumption: "Some projections suggest that by 2027, data centers will account for one-fifth of global energy consumption."

In addition, so-called "tech-solutionism" tends to ignore the complexity of society by designing its solutions for a standardized average person: "Taking the standardized person as a proxy for the population is a problem, because people are not standardized." As an example, Stefania Milan cited the first versions of the contact-tracing app in Germany during the pandemic: they were only compatible with the latest smartphone models and therefore excluded large parts of the population who were still using older models. To address these AI-related challenges, she proposed three fields of action: a robust regulatory framework, ethical guidelines for the development and use of the technologies, and public awareness of their vulnerabilities.

Uniting the resistance

Ways out of the technology trap described above were also discussed in the subsequent panel. Entrepreneur Lorna Goulden believes that the problem can be addressed with additional technology, for example digital tools that give people more control over their own data or that help developers of AI applications to implement responsible solutions in line with principles such as transparency and privacy protection. Journalist Luca Zorloni from the digital magazine "Wired" focuses on regulation and calls for something like a statutory disclosure obligation for algorithms that have a major influence on the public sphere. Ilia Siatitsa from Privacy International appealed to the responsibility of developers to design their solutions from the perspective of the most vulnerable groups affected by an AI application and only then extend them to the general public. All three agreed that resistance to the undesirable developments of AI is currently very fragmented. "It is necessary for all organizations that deal with the topic to come together in order to be better able to stand up to the big tech companies," said Lorna Goulden.
