Interview: AI, Judicial Decisions and Bias

17-08-2021

Cara Warmuth visited the Faculty of Law at the University of Graz as a “Land Steiermark Fellow” in July 2021. The CIHG took the opportunity to ask her about her recent project at the Institute for Legal Informatics at the University of Hannover, where she is currently focusing on judicial decision-making and how legislative reasoning should consider behavioral data. Read why she thinks that AI needs to be supported by psychological studies, what role it will play in reaching verdicts, and her take on interdisciplinarity.

 

Cara, you are an academic researcher at the Institute for Legal Informatics at the University of Hannover, where you are working in an interdisciplinary project investigating bias and discrimination through Big Data and Algorithms. Could you tell us a bit about the project’s core topics? How do the concepts of artificial intelligence (AI), judicial decisions, and bias in decision-making come together?

It is an interdisciplinary project in which we focus on bias in automated decisions. “Interdisciplinary” in this case means that we, the legal scholars, work together with philosophers, computer scientists and engineers. This is very exciting, as we can investigate AI and bias from different disciplinary perspectives and learn a lot through this approach.

An example of bias through AI is the granting of loans. Imagine you walk into a bank and ask for a loan. The decision regarding whether you are granted the loan is not made by the bank employee alone, but with the assistance of an algorithm. Statistically, women are granted loans less often than men, because decision support systems perceive them to be less creditworthy. Why the AI comes to this conclusion is unclear. What is clear is that certain groups of people – among them women – are discriminated against through the AI’s decision. This can also be the case with hiring decisions or a court judgment. And this is exactly what we want to avoid; it is the main focus of the project.
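To make the loan example concrete, here is a minimal sketch with purely hypothetical, synthetic numbers (not data from the project): a scoring model trained on historically biased approval decisions carries that bias into its predictions for new applicants.

```python
# Minimal illustrative sketch with synthetic data (hypothetical numbers):
# a model trained on historically biased loan decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 15, n)        # income in thousands, same distribution for everyone
is_woman = rng.integers(0, 2, n)      # protected attribute

# Hypothetical historical decisions: identical income threshold,
# but women were approved less often for no objective reason.
approved = (income > 45) & (rng.random(n) > 0.3 * is_woman)

model = LogisticRegression().fit(np.column_stack([income, is_woman]), approved)

# Two applicants with identical income: the learned model assigns
# the woman a lower approval probability -- the old bias has become a rule.
applicants = np.array([[50.0, 0], [50.0, 1]])   # man, woman
print(model.predict_proba(applicants)[:, 1])
```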

 

You are working at the intersection of law, psychology, and computer science, and you are also studying psychology yourself. To what extent do you see it as necessary to include psychological studies in judicial decisions supported by AI?

I am fully convinced that psychological studies should be integrated into the development and consultation processes of AI. This would also involve asking whether AI-supported court rulings are accepted by the involved parties. If a citizen is not able to comprehend and respect an AI-supported decision, then this would be an important psychological finding that we would have to take into account.

In the US, for example, we have seen studies in which patients perceive their doctors as less competent when the doctors use AI to support their diagnosis. Transferred to judicial rulings, this matter of acceptance would also mean that AI-supported decisions could lead to a higher number of appeals, which would not necessarily be beneficial for the legal system.

 

And regarding transparency? How should we deal with the fact that AI-supported decisions are not necessarily comprehensible?

The black box phenomenon in algorithmic decisions is always problematic. The citizen looking for a legal solution has a right to a comprehensible judgment. But if not even the judge is able to comprehend how the judgment was made, that is a major obstacle. If algorithms are to be used as decision-support tools by the courts, then the judge must be able to comprehend how those decisions were formed.

This also leads to a conflict between trade secrets and the required level of transparency. One solution might be to involve certain experts in the judicial process who are able to comprehend the underlying workings of the algorithmic system and who could then explain them to the involved parties, if necessary.

 

Assuming a judge were using an AI-driven support system but lacked the technical expertise to comprehend its functions – how is this compatible with judicial autonomy?

That is a very interesting question. But it is not limited to AI-driven support systems, as the same situation arises with specialist reports used to answer technical or other topic-specific questions. It is very unusual for a judge to have full technical comprehension of any given circumstances. They are therefore always permitted to rely on expert knowledge, for example when dealing with accidents. For me, the same would hold true for AI-driven support systems, although I think it would be important to have someone with the same level of authority who knows how the underlying algorithm works.

 

Would this person then be received as an expert or as a proxy for the AI? Would you understand the AI’s decision as a kind of specialist report, with this additional person explaining how the AI came to its conclusion?

Yes, this could be a way to perceive it. Nevertheless, I would not regard the AI’s decision as equal to a specialist report. I just wanted to use it as an example to show that there is nothing wrong with the judge not having all the technical knowledge to understand the AI’s functioning. We do not expect a judge to solve complex physics equations, for example, when reconstructing an accident, although they will still be the one reaching a verdict.

What we should rather be asking is why we are so eager to use AI in judicial decision-making processes. Is it because it is fancy? Because we are capable of doing so?

In my opinion there are two possible reasons. Firstly, we could say that it is more economical. There are various processes in which a judge could save a lot of time, for example when there are huge amounts of data that they would otherwise have to sift through themselves, or standardized procedures like the diesel exhaust cases, which could be outsourced to an AI.

The second argument would be that an AI might increase the chances of a just procedure, because it is capable of reaching objective and neutral decisions. But we should look very closely here, especially when it comes to “bias”. No one can guarantee that an AI reaches better or unbiased decisions. Its training data is produced by humans. The system learns from old data, based on past human decisions, and this means that the underlying data is inevitably biased.

 

So this means that, in the best case, you would get the mean value of biased decisions?

Yes, exactly.
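To put a number on that: a tiny sketch with hypothetical approval rates shows that averaging many individually biased decisions simply reproduces the biased averages, gap included.

```python
# Tiny numerical sketch (hypothetical approval rates): averaging many
# individually biased past decisions does not remove the bias -- a system
# trained on them at best learns the biased averages.
import numpy as np

rng = np.random.default_rng(1)
past_for_men   = rng.random(100_000) < 0.70   # 70 % historical approval rate
past_for_women = rng.random(100_000) < 0.55   # 55 % historical approval rate

# The "best case" baseline a data-driven system can recover:
# the old means, gap included.
print(round(past_for_men.mean(), 3), round(past_for_women.mean(), 3))
```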

 

If we imagine where this could be applied – how do you see the use of AI in evidence evaluation? Do you think AI could be used here?

I think this question will only become relevant decades from now, as one of the last steps we will have to think about. Currently the task is too complex to be handed over to an AI. My understanding is that the judge stays where they are. They keep their core tasks, namely reaching verdicts and assessing facts, but in doing so they will be strongly supported by AI systems.

 

It seems as if we are drifting into fortune-telling, so I would like to close with one last question. You are working within a very interdisciplinary environment. Where do you see the biggest difficulties, and what have you learned in terms of interdisciplinary cooperation?

The biggest difficulty for me is mutual understanding. Of course, we are all scientists, but being able to make yourself understood in your specific disciplinary language is difficult.

Sometimes you find yourself talking about the same term, for example “fairness” in algorithms. But a statistician has a completely different understanding of the term “fairness” than a legal scholar. I think it is important to address this at a very early stage. We have to talk about these specialized terms and reach an agreement about the language that each discipline is using. Especially when we come from disciplines that have almost no common ground, such as computer science and legal studies, there are big distances to overcome. That really is the main difficulty.
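To illustrate how far apart those understandings can be, here is a minimal sketch with hypothetical decisions: the very same set of outcomes counts as fair under a statistical notion of equal approval rates, yet unfair under a notion of equal treatment of the actually creditworthy.

```python
# Minimal sketch with hypothetical decisions: the same outcomes can satisfy
# one formal notion of "fairness" (equal approval rates) and violate another
# (equal true-positive rates), which is why disciplines first need to agree on terms.
import numpy as np

# Columns: (actually creditworthy, approved) for two groups of applicants.
group_a = np.array([[1, 1], [1, 1], [0, 1], [0, 0]])
group_b = np.array([[1, 1], [1, 0], [0, 1], [0, 1]])

def approval_rate(group):                 # "demographic parity" view
    return group[:, 1].mean()

def true_positive_rate(group):            # "equal opportunity" view
    creditworthy = group[group[:, 0] == 1]
    return creditworthy[:, 1].mean()

print(approval_rate(group_a), approval_rate(group_b))            # 0.75 vs 0.75 -> "fair"
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 vs 0.5  -> "unfair"
```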

But once you cross this distance, it can be very rewarding. Then you can really make progress in finding solutions that you would not have been able to find within the borders of your own discipline.

 

Then I suggest we end with this plug for interdisciplinarity. Thank you very much!

 

The interview was conducted by our interns Gvantsa Kapanadze and Mag. Caroline Elisabeth Müller.
