Trust in digital technologies is an "error"

Professor Peter Schaber's team at the Ethics Centre of the University of Zurich examined the issue of trust in digitalisation processes and suggests adopting a different stance towards digital technologies.

Digitalisation creates uncertainties, yet we want to benefit from it. According to a common argument, trust in technology is therefore central. But can we truly trust digital applications at all? The research team led by Prof. Peter Schaber from the Ethics Centre at the University of Zurich addressed these and other questions.

The research project pursued two guiding questions:

  1. Are digital structures such as algorithms trustworthy, and how should we behave towards them if not?
  2. How does digitalisation affect our non-digital relationships, and how should digital processes be designed to maintain or strengthen trust?

The most important finding

The Zurich research team showed that speaking of "digital trust" rests on a category mistake: applied to mere things, the concept of trust does not make sense. Instead, we should strive for the reliability of digital technologies.

Regarding the second question, Professor Peter Schaber's research team focused on the consequences of social media for the functioning of democratic systems. "Democratic trust" is defined as citizens' ability to adopt the perspective of fellow citizens and to communicate their own political position in an understandable manner. Against this backdrop, the researchers showed that digital communication, and social networks in particular, can weaken democratic trust (see: Three main messages).

Importance for politics and practice

As a result of their study, the researchers encourage a discussion on the level of reliability required in various contexts. This could influence the regulation of technology.

The researchers applied the theoretical insights gained on trust in digital processes to specific areas. Two recommendations for practice from Professor Peter Schaber are as follows:

“For physicians and medical personnel it is far more important to care about the interpersonal relationships with patients than trying to understand how a particular machine-learning algorithm works.”

“Software engineers caring for the well-being of users (and society at large) are well-advised to sometimes sacrifice algorithmic efficiency in order to allow for direct and personal interactions that do not reduce users to clusters of particular properties.”

Three main messages

  1. It is a category mistake to claim that one trusts a digital artefact; at best, we can rely on such artefacts. Adopting this attitude helps us to avoid dangerous anthropomorphisations that only benefit the industry.
  2. In the field of medicine, we should welcome the use of AI algorithms, particularly in the context of medical diagnosis. But since our main goal with respect to this kind of technology is to ensure its reliability, we should hold an open-ended discussion about what we want (or do not want) from medical AI.
  3. Digital communication tools can strengthen existing trust relationships, especially relationships between citizens, by enabling non-instrumental user interaction that reduces anonymity and abstraction.

For detailed information on the researchers' methodology and further background on the research project, please visit the NRP 77 project website.

Additional research projects on the topic of "Digital Transformation" within the framework of the National Research Programme NRP 77 can be found here:

Project overview