Regulating AI: discussing the latest NRP 77 research findings

The fourth and final dialogue event was all about regulating artificial intelligence. Can the legal, ethical, economic and technical aspects of AI be regulated under existing frameworks?

Altogether, 11 of the 46 NRP 77 “Digital Transformation” projects focussed on artificial intelligence (AI) in one form or another. AI is not a new concept and has been in development for more than 60 years. However, the launch of ChatGPT in November 2022 made the possibilities offered by AI far more visible. The question therefore arises as to how this technological transformation should be regulated, for instance through laws and ordinances. The Federal Administration is currently preparing an overview of AI and defining the relevant regulatory requirements.

Dr Markus Christen, Managing Director of the Digital Society Initiative (DSI), at the NRP 77 dialogue event on AI © advocacy

In collaboration with the Federal Office of Justice, NRP 77 held a dialogue event in Bern on 26 November 2024, which attracted considerable interest. Among other contributions, the following two NRP 77 research projects were presented and discussed:

  • Digital transformation is bringing people and digital tools together more closely than ever before, merging them into veritable teams. In their NRP 77 project, Thomas Burri and Markus Christen examined the interaction between people and AI. The central question is how humans can retain control over an AI-supported process and what this means for regulation. For example, how should self-driving cars be regulated? The researchers found that test subjects were more likely to trust a team of humans and an AI tool than either an AI tool or humans alone. At the same time, humans risk being made scapegoats when AI makes mistakes.
  • Daniel Kettiger and his team investigated the anonymisation of court decisions. By law, a large number of court decisions are published, but they are anonymised to prevent the people involved from being identified. AI can play a role in two areas of this process. On the one hand, AI is increasingly used to anonymise these decisions, with a human only checking the result afterwards. On the other hand, published court decisions can be fed into an AI model in an attempt to re-identify the people involved. The research highlighted that, although good programs for automatic anonymisation are available, users remain reluctant to adopt them. A further finding was that the risk of re-identification with existing AI tools is low. However, this is only a snapshot of the current situation: technical developments are moving at such a pace that this finding may no longer hold in a few years’ time.

During the discussion, Markus Christen explained that current regulations already cover AI to a large extent. The invited stakeholders, Ladina Caduff (Microsoft) and Estelle Pannatier (AlgorithmWatch), agreed that existing legal frameworks need significant changes, but that it remains to be clarified in which areas these changes should be made. Susanne Kuster (Federal Office of Justice) was also of the opinion that introducing specific legislation for a new technology would not be in line with Swiss tradition, as Swiss regulations are usually technology-neutral.

From left to right: Abraham Bernstein, Mathis Brauchbar, Markus Christen, Daniel Kettiger, Estelle Pannatier, Ladina Caduff, Stefan Wiprächtiger © advocacy

Stefan Wiprächtiger (Swiss Judges’ Association) pointed out that AI also poses a challenge for society. He expects that his work as a district court judge will in future involve teams working with AI, whether for anonymising decisions or processing files, although he believes this transformation will take time.