Though robots may not be arguing cases in court any time soon, AI is already having a strong influence on the judicial system in the United States and further afield, according to a recent panel discussion hosted by the AI Alliance of Silicon Valley.

Legal leaders from both sides of the Pacific came together June 29 in Half Moon Bay to discuss artificial intelligence in law at the US-China AI Tech Summit.

Panelists, some of whom were attorneys, noted that in both China and the United States, AI is already changing justice and due process in complicated ways.

“China and the U.S., we share similar challenges … especially for the transformation issues and [ensuring] fairness, and human dignity,” said Jiyu Zhang, an associate professor at the IP Academy of Renmin University of China.

In China, the number of judges has decreased while the number of cases has increased annually, noted panelist Jia Gu, chief researcher at the HuaYuYuanDian Law Research Institute. Gu said that “with less power and more cases,” the Supreme People’s Court, the country’s highest court, is willing to seek help in the form of technology.

“We always say justice delayed is justice denied,” Zhang said. “People see [that] using artificial intelligence technology will [help] promote efficiency.” 

But using artificial intelligence to speed up legal cases can get complicated, in the United States and elsewhere, a challenge that Microsoft Corp. assistant general counsel Mike Philips said is at the forefront of his team’s mind.

“We think a lot about the fundamental impact of AI … on some real fundamental concepts underlying the law and a society founded upon the rule of law,” Philips said. “Certainly [we're] thinking very carefully about what the impact [of] these systems might be on due process, equality under the law.”

Lee Tiedrich, partner and co-chair of the AI initiative at Covington & Burling, recalled Loomis v. Wisconsin. That petition to the U.S. Supreme Court was an attempt to overturn a Wisconsin Supreme Court decision that sent defendant Eric Loomis to prison for six years. Loomis’ lawyers argued that the court’s use of AI risk assessment tools in its judgment violated Loomis’ due process rights.

Though the U.S. Supreme Court declined to hear the case, the petition raised essential questions about the ethics of AI and due process.

Tiedrich noted that AI in the justice system also raises questions of transparency and intellectual property. In cases where AI is alleged to perpetuate hiring discrimination via a biased algorithm, for example, the maker of the AI tool may be asked to disclose the algorithm, Tiedrich said. And if that algorithm is a trade secret, IP problems could arise.

“The key takeaway from the U.S. perspective is, as technology continues to advance and move forward, it’s important for [the AI] community as a whole to say, what are the legal issues impacted by AI, [including] security liability, and come up with [a] sense of the rules and guidelines that properly balance all of them,” Tiedrich said.