Do Androids Dream of Biased Judges? Self-fulfilling Prophecies in AI Litigation Prediction

Inside View

Dr. János Vajda, Attorney at Law and cognitive psychologist, Partner, Szecskay Attorneys at Law

It is hardly breaking news that technology is rapidly changing the legal profession. Although the spotlight has lately been on ChatGPT and other generative AI tools, another exciting domain of artificial legal intelligence, quantitative legal prediction (QLP), is also gaining traction in the market.

QLP uses datasets of previous litigation cases to learn correlations between case features and target outcomes. The underlying variables are manifold: who the judge in the case is; the type of legal arguments used; which precedents a particular judge cites; the weight the judge attributes to specific pieces of evidence; even whether the judge is prone to certain cognitive biases (such as hindsight bias, the conjunction fallacy, the anchoring effect, framing, and confirmation bias). Legal counsel can then use the results of this calculation to make predictions about the case and to make strategic decisions within the adjudication process.
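To make the mechanics concrete, below is a minimal sketch of how such a model might be trained, using Python and scikit-learn. Every feature, data point, and parameter here is invented for illustration; production QLP systems use far richer case data and more sophisticated models.

```python
# Minimal sketch of a QLP-style model: learn outcome correlations
# from (hypothetical) case features. All data here is invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical historical cases: the judge, the type of argument used,
# and the observed outcome (1 = claimant won).
history = pd.DataFrame({
    "judge":    ["A", "A", "B", "B", "A", "B", "A", "B"],
    "argument": ["statutory", "precedent", "statutory", "equity",
                 "equity", "precedent", "statutory", "statutory"],
    "outcome":  [1, 0, 1, 0, 0, 1, 1, 1],
})

model = Pipeline([
    # One-hot encode the categorical case features.
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(), ["judge", "argument"])])),
    # Learn feature/outcome correlations from the historical record.
    ("clf", LogisticRegression()),
])
model.fit(history[["judge", "argument"]], history["outcome"])

# Predict the win probability for a new case before judge A.
new_case = pd.DataFrame({"judge": ["A"], "argument": ["precedent"]})
print(model.predict_proba(new_case)[0, 1])
```

The point of the sketch is only that the model has no notion of law: it simply maps whatever correlations exist in the historical record, including any biased ones, onto future predictions.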

As a result, QLP can identify the personal tendencies of particular judges that would otherwise be invisible to lawyers. Nevertheless, there is growing concern that AI systems learn and exaggerate human cognitive biases.

One way bias can creep into algorithms is through training data: AI systems learn to make decisions from data that records biased human choices. If the algorithm is trained on biased human decisions, the AI will inevitably reflect those biases, and it will most likely even exaggerate them by treating them as true for its future decisions and outcome predictions.

Even though decisions based on the AI's conclusions seem reasonable, they might be profoundly flawed when put in a broader perspective. Suppose our super-recognizer's conclusion is on point and has correctly pinned down a judge's bias, and we decide not to fight for our case, and so, indirectly, not against that bias. We have just put one more brick in the wall of the justification mechanism. Worse yet, when we (or someone else) face a similar decision before the same judge, the judge's biased pattern will be even more visible to our super-recognizer, which will assign this variable greater weight the next time it computes. Ultimately, we find ourselves in a never-ending cycle of our AI reinforcing the judge's bias.
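A toy simulation illustrates this feedback loop. The numbers and the update rule are pure assumptions: we suppose counsel concedes whenever the model's bias estimate crosses a threshold, and that each concession enters the record as further evidence of bias.

```python
# Toy simulation of the self-fulfilling prophecy: the model's estimate of a
# judge's bias grows whenever lawyers concede based on that very estimate.
# The initial estimate, threshold, and update rule are all assumptions.

estimated_bias = 0.55   # model's initial belief that the judge favors one side
CONCEDE_AT = 0.50       # counsel concedes when the estimate exceeds this
LEARNING_RATE = 0.05    # how strongly each new "observation" shifts the model

for round_ in range(1, 11):
    if estimated_bias > CONCEDE_AT:
        # Counsel concedes; the record now shows another one-sided outcome,
        # which the model ingests as fresh evidence of bias.
        estimated_bias += LEARNING_RATE * (1.0 - estimated_bias)
    print(f"round {round_:2d}: estimated bias = {estimated_bias:.3f}")
```

Even if the judge's actual tendencies never change, the estimate drifts toward certainty, because the data the model sees is partly produced by the decisions the model itself induced.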

When we buy into the idea that a biased judge has already decided our case, regardless of our legal arguments, we profoundly compromise our right to participate in the adjudication process. If we make decisions in this spirit, our super-recognizer's predictions become self-fulfilling prophecies: the AI's expectation leads to its own confirmation.

We must also remind ourselves that even though algorithms can outline significant patterns specific to judges, those patterns do not necessarily reflect actual biases; a judge who rules against a certain type of claim unusually often may, for instance, simply have been assigned an unrepresentative docket. For this reason, one should not overestimate the reported results of the algorithms.

Another concern is that applying QLP might undermine the fairness of the litigation process. This unfairness might stem from a violation of the principle of equality of arms: if predictive justice becomes increasingly influential, unequal access to these tools will widen the advantage that wealthier and more powerful litigants have over parties who cannot access QLP tools.

It is noteworthy that, for the above reasons, the French legislature introduced a controversial rule in 2019 that bans the use of QLP to outline a judge's patterns and predict case outcomes based on the judge's prior behavior. Although the somewhat harsh legislation caused a severe backlash and was criticized by many scholars and free speech organizations, it can in some ways be considered an extension of broader European concerns about the ethical use of AI. Hopefully, the much-awaited, soon-to-be-adopted EU AI Act will clarify the European rule-makers' stance on this issue.

This article was first published in the Budapest Business Journal print issue of June 2, 2023.

