AI-Infused Justice for All?

The jury is still out on how to best blend artificial intelligence and the law for the betterment of the judicial system. MSAI director Kristian Hammond sees the potential — and the dangers — of its adoption.

At first, there was nothing unusual about Roberto Mata’s 2023 lawsuit against Colombian airline company Avianca.

Mata claimed he was injured when a metal serving cart struck his knee while he was on a flight to New York. Avianca responded and asked a Manhattan judge to throw out the suit. Mata’s lawyers objected, submitting a 10-page brief citing numerous cases allegedly relevant to their client’s cause.

There was just one problem: None of the cases actually existed.

Mata’s attorneys, it was soon revealed, had used the generative artificial intelligence (GenAI) tool ChatGPT to write the brief. The large language model (LLM) made up the citations used to protest the suit’s dismissal, and no one on Mata’s legal team double-checked their authenticity.

Each attorney was fined $5,000 for the oversight and for initially asserting that they had written the brief themselves.

Kristian Hammond has been watching intently as AI works its way deeper into the practice of law, including in situations like the Avianca lawsuit. As the director of Northwestern Engineering's Master of Science in Artificial Intelligence (MSAI) program, he sees the dangers in AI’s unchecked use. He also is captivated by the possibilities it creates to build a judicial system more in line with the original intent of law.

For example, one practice often depicted in film is for either the prosecution or defense to "bury the other side" in paperwork, meaning they deliver countless boxes of files for their opposition to weed through. Often the move is meant to hide pertinent information, with the hope that hours of time will be wasted trying to find the proverbial needle in the haystack.

In that example, AI can serve as a fantastic needle detector.

"Lawyers used to have associates go through scores of boxes of documents to read through them," Hammond said. "Machines can do the same kind of thing and find semantically relevant documents in a matter of seconds. AI can do all the work that junior associates used to do that took a tremendous amount of time and didn’t teach them anything."

While that is a tactical benefit, Hammond also believes there is a philosophical benefit to bringing AI into the legal realm.

“We want the world to be more just, and we want everyone to have access to that justice,” Hammond said. “We're now in a place where the technology itself might help us with fairness and access to justice — if we do it right.”

Hammond is weaving what it means to “do it right” into the MSAI curriculum. Partnerships are the key to successful and responsible integration of AI’s strengths into the practice of law, he said.

To that end, MSAI has partnered with Northwestern’s Pritzker School of Law, and in particular with Daniel Linna, senior lecturer and director of law and technology initiatives, who holds both a law degree and a wealth of experience with technological integration.

The partnership led to the creation of an MSAI course held in conjunction with the law school. The class looks at legal problems from a technology perspective. Of late, AI has taken center stage.

Linna also works with MSAI leadership and students to discuss how the law is applied to AI and, conversely, how AI is going to affect the practice of law.

“That impact is increasing every day because of the nature of the technologies that we're using,” Hammond said. “We have opportunities for students to join in and become part of the process.”

That process needs to avoid creating a two-tiered judicial system — one tier with people who use AI-empowered machines and a second tier of those who do not. Hammond also cautioned against allowing AI-infused judicial decisions because of biases that can find their way into LLMs.

In some ways, AI’s move into law is surprising, Hammond said. After all, the profitability of the profession is based on the billable hour, and AI tends to make businesses more efficient. More efficiency equals fewer billable hours.

That said, the publicity behind LLMs such as ChatGPT made AI impossible for the profession to ignore.

Now, Hammond said he sees it as MSAI’s responsibility to prepare students to be leaders and ensure AI and the law mix ethically. It’s also the program's job to educate students on its potential dangers, such as those seen through Roberto Mata’s lawsuit against Avianca.

“We are all trying to figure out how to use this technology, how to use it safely and intelligently without making mistakes or having client data leak,” Hammond said. “All of these questions are about how you bring these technologies into the enterprise. We want to prepare our students for that.”

McCormick News Article