Containing the risks of new technologies without hampering innovation

In light of the EU AI Act, University of Freiburg professors Rolf Backofen (Computer Science), Oliver Müller (Philosophy) and Silja Voeneky (Law) discuss the adaptive regulation of new technologies

The EU AI Act is expected to come into force in 2024. It differentiates between various artificial intelligence (AI) systems according to the risk they pose to the human rights of citizens in the European Union (EU). AI applications posing an unacceptable risk will not be allowed to be used or sold in the EU, while those posing a high risk will be regulated.

“The appropriate regulation of AI is one of the major challenges of our time in the field of emerging technologies,” says legal scholar Prof. Dr Silja Voeneky. She welcomes the fact that the EU is going forward, but notes that such regulation must be “proportionate and also adaptive,” something that has not yet been sufficiently taken into account in the Draft EU AI Act. Voeneky, philosopher Prof. Dr Oliver Müller and bioinformatics specialist Prof. Dr Rolf Backofen focus their joint interdisciplinary research on the concept of adaptivity, i.e. the ability to adapt quickly to the opportunities and risks of new technologies.

When it comes to regulation, says Müller, a variety of legal and ethical framework conditions are relevant, from aspects of privacy protection to the question of “the extent to which chatbots reproduce social prejudices.” In view of this complexity and the rapid pace of technological development, regulation still lags behind.

AI as a key tool for adaptive regulation

In terms of adaptivity, AI itself could be an important tool, says Backofen: “With a system evolving as quickly as chatbots, you need a regulating AI that can react directly to changes.” According to Müller, the speed at which new technologies are evaluated could also be increased through “internal coherence,” meaning “that we don’t have to start from scratch for every new technology.” Although AI has been very present in public discourse since the market launch of the chatbot ChatGPT, it is only one emerging technology among others, such as green genetic engineering and gene therapy.

Containing risks without hampering innovation

Regulation is not about preventing innovation. Rather, according to Backofen, “we need adaptive regulations so that we can utilise the advantages of the technologies well.” According to Voeneky, regulation is helpful when it “quickly contains risks on the one hand and does not hamper innovation potential on the other.” She adds that an interdisciplinary approach is crucial for such appropriate regulation: “We can’t meaningfully discuss standards and laws if we don’t understand the technologies.”

The responsibility of university research for social discourse

According to Voeneky, one of the strengths of the University of Freiburg lies in this kind of highly interdisciplinary collaboration. In addition, a “more neutral view can only be achieved by researchers who have no direct interest in selling a product. It is our task to create an awareness of opportunities and risks so that social discourse can be based on sound information.” At the same time, there is a “tension between expectations from a legal and ethical perspective regarding standards and values on the one hand and public perception on the other, and these can certainly differ,” adds Müller. It remains to be seen how the EU AI Act will be received by society, but it will come into force at a time when, in Voeneky’s perception, the population is “becoming more aware that AI is not just a nice app on your smartphone, but can also be linked to major disruptive risks.”

Rolf Backofen, Oliver Müller and Silja Voeneky are available for media requests.

Together, they are spokespersons for the Cluster of Excellence initiative Adaptive Futures. Further information on the initiative and the Freiburg Excellence Strategy as a whole can be found here:

Link to original press release and video interviews with Rolf Backofen and Silja Voeneky: