“In critical areas, AI does not work today”

Artificial intelligence is said to have huge potential. It nurtures hopes, but also stirs up fears, for example when used in medicine or lethal autonomous weapon systems. Where do we stand today and what might the future hold?

It sounds revolutionary: from algorithms that detect cancer to models that predict the next natural disaster to self-driving vehicles. At the same time, many people fear that algorithms and AI systems could become too powerful. A widespread concern is that AI could entrench discrimination and make fatal mistakes that a human being would never make.

The Algorithm Accountability Lab, headed by Professor Katharina Zweig at the Technical University of Kaiserslautern, investigates the quality and fairness of algorithmic decision-making systems. Zweig has a clear opinion on when artificial intelligence should be used and when it is better not to use it.

For you, AI is as revolutionary as the invention of the printing press. Where will we feel its effects most strongly?

That's hard to say. Unlike before, computers can now process not only structured data but also decode words and images. That is the real game changer. When it comes to the question of what we can do with it, we are at the very beginning. AI could be the technology that enables self-driving cars or more individualised therapy with effective and well-tolerated medicines. In other areas, such as human resource development, I am a little more critical.

When it comes to the question of what we can do with AI, we are at the very beginning

Why?

Because AI is nothing more than a tool. Today's algorithmic decision-making systems often make decisions, based on historical data, that no one questions. But it is quite crucial that a human being looks at the results of the AI and checks that the causal relationships the AI claims to have found do actually exist. One thing is quite clear: in critical areas such as evaluating or even predicting human behaviour, AI does not work today.

AI is nothing more than a tool (…). It is quite crucial that a human being looks at the results of the AI

How can people ensure that AI really works for their benefit, then?

There are clear criteria. First, there needs to be an evaluation process in which experts check the quality of algorithmic decisions and make changes if necessary. Secondly, fundamental philosophical questions must be taken into account. Lethal autonomous weapon systems, so-called killer robots, violate the presumption of innocence. If such fundamental principles of law are violated, AI should not be used, even if the decisions were 99.99 percent correct, which they are not.

Many people fear that algorithms could entrench discrimination. How justified do you think this concern is?

It is definitely justified. Human behaviour is too complex to be handed over to mathematical models without question. In some courtrooms in the US, for example, software is now used to predict whether criminals will reoffend. The success rate in predicting violent crime is low. But I think using AI here is also deeply problematic for fundamental reasons. Successfully predicting people's behaviour is beyond the current state of research.

In some courtrooms in the US, software is now used to predict whether criminals will reoffend

However, some companies hope to make a lot of profit from it.

At the moment there is a lot of discussion about using AI to categorise, evaluate and predict human behaviour. That costs a lot of money, and the machines are not necessarily good at it. I think it would make much more sense if companies used the technology more in production. There is still a lot of room for improvement there.

How do you look after your own data on the web?

I check almost every cookie policy and give minimal access rights. And my family doesn't have a voice assistant in the house. The biggest danger I see is that fabricated statements can now be put into people's mouths. Even with relatively little footage, a person's voice and facial expressions can be imitated amazingly well. In the next few years, we will see many fraud attempts using AI.

About Katharina Zweig

Katharina Zweig is Professor of Computer Science at the Technical University of Kaiserslautern, Germany. She is also an AI consultant, advising employees, works councils and politicians on the topic of artificial intelligence.

Katharina Zweig has won several awards in the field of science communication, including the German Research Foundation (DFG) Communicator Award. And she has written a bestselling German-language book on algorithms (Ein Algorithmus hat kein Taktgefühl).

Her start-up “Trusted AI” helps companies to buy trustworthy AI or develop it themselves.

Andrea Michels

… is fascinated by how quickly AI is changing our everyday lives. At the same time, she wonders which decisions we can leave to machines and where humans remain indispensable.
