How artificial intelligence (AI) impacts alternative dispute resolution (ADR)[i] processes and the role of the neutral (e.g., third-party negotiator, mediator or arbitrator) depends, among other things, on the type of technology, its functions and purposes, and the opportunities for human oversight and intervention. It is helpful to think of AI in ADR (AIDR) as existing on a loose spectrum:

Assistive technologies can support and inform neutrals or make recommendations to them. These technologies can help reduce the burden of high-volume, repetitive tasks (e.g., administrative and procedural requirements) and provide informational resources that support informed and accurate neutral decision-making, thereby advancing ADR’s core objectives of providing disputants with a fair, efficient and economical resolution process.

Automative technologies, which can partially or fully automate discrete tasks, and in some cases even replace neutrals, can help facilitate or independently perform legal research; document preparation and analysis; case negotiation; settlement, award and resolution plan drafting; and decision-making functions. These technologies can forecast case outcomes for self-represented litigants, as well as autonomously resolve minor, relatively straightforward disputes, potentially freeing up neutrals to focus on more complex matters.

AIDR Risks and Challenges

Machine-learning-based AI systems’ transformational potential stems from their ability to derive rules from correlative patterns in data and then apply those rules to new data. However, laws, rules and legal disputes rarely supply the kind of consistent structure from which algorithms can reliably learn such patterns. Conflicts can involve multiple areas of law (e.g., tort, property, insurance, family) and disputants from different jurisdictions, which can complicate AI training. Complex and disputed fact sets are another feature of many cases, and no existing AI system can reliably measure human credibility. Further, human neutrals often rely on experience, knowledge and normative judgments to navigate subtle differences in context (e.g., “reasonable” behavior and “foreseeable” outcomes) and to deal with social and emotional issues.
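
To make the contrast concrete, consider the following minimal sketch (illustrative only; the case features, outcome labels and model choice are hypothetical assumptions, not drawn from any real AIDR system) of the basic machine-learning loop described above: a model is fit to historical, labeled examples and then applied to a new, unseen case.

    # Illustrative sketch only: hypothetical features and labels, not a real AIDR system.
    from sklearn.linear_model import LogisticRegression

    # Historical cases reduced to numeric features, e.g., [claim amount, days to filing].
    # The encoding itself is an assumption; real disputes rarely reduce to clean numeric rows.
    X_train = [[5000, 30], [12000, 90], [800, 10], [20000, 120]]
    y_train = [1, 0, 1, 0]  # 1 = claimant prevailed, 0 = respondent prevailed (hypothetical labels)

    model = LogisticRegression()
    model.fit(X_train, y_train)  # derive "rules" from correlative patterns in the data

    new_case = [[7500, 45]]
    print(model.predict(new_case))  # apply those rules to a new dispute

The sketch’s limitation is the point: everything the model “knows” comes from how past cases were encoded and labeled, which is precisely what multi-jurisdictional, fact-intensive and credibility-dependent disputes resist.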

Most AI systems cannot execute significant tasks without human oversight, yet some can operate and produce predictions, recommendations or decisions in a manner that is not explainable or understandable to system users. The opacity of these “black box” AI systems makes it difficult to verify whether their outputs are valid and reliable, or whether they harbor underlying biases or errors. Not being able to access or understand the basis of a decision undermines disputants’ rights to a reasoned decision.

For these reasons, some argue that automative technologies should never replace humans in dispute resolution and legal processes insofar as they lack human reasoning and common sense, and therefore cannot achieve meaningful fairness and justice.

The European Regulatory Landscape for AI and AIDR

The provisional text of the European Union Artificial Intelligence Act (EU AI Act) was released in February 2024, and the act is expected to be voted on and formally adopted in March 2024.

The EU AI Act classifies AI use cases according to their risk level. High-risk applications are subject to several requirements before systems can be released to the market: conformity assessments to demonstrate compliance with requirements for trustworthy AI (data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness) and the implementation of quality management and risk management systems. Due to risks of error, bias and opacity, the act classifies the use of AI in the administration of justice and ADR as high risk where systems are “intended to be used by a judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”

The European Commission for the Efficiency of Justice’s (CEPEJ) perspective on assistive versus automative technologies is consistent with the EU AI Act, emphasizing that final judicial decisions must remain “a human-driven activity and decision.” In December 2023, the CEPEJ adopted a set of guidelines for online dispute resolution (ODR) that reflect existing ADR standards and practice, including those articulated by the United Nations Commission on International Trade Law (UNCITRAL). Among other things, the guidelines state that deployers of ODR and AIDR systems should adopt technical measures that comply with the latest standards for safety, fairness and efficiency; have sufficient knowledge of the technology being used, including its potential risks and negative impacts; ensure the effective participation of parties; and not violate data protection laws.

How AI Rules Will Become ADR Rules

AI and ADR are regulated through rules of more general applicability, such as those governing privacy and advertising practices. Rules that apply to ADR, such as conflict disclosures, also apply to AI used in ADR. The emerging body of rules specific to AI will likewise apply to ADR.

In 2020, the European Parliament’s Committee on Legal Affairs suggested that, because deployers control AI system risks, they should bear liability for AI-generated harms. Under the EU AI Act, deployers of high-risk AI systems used in the public administration of justice must perform fundamental rights impact assessments to identify specific harms likely to impact persons or groups and prepare governance and mitigation arrangements, such as human oversight, complaint handling and redress procedures. Deployers who base a decision with legal effect on a high-risk AI system output also owe impacted parties “clear and meaningful explanations on the role of the AI system in the decision-making procedure and the main elements of the decision taken.” These requirements may make neutrals liable for harms caused by AI systems (e.g., using a system that operates with a systemic racial bias), encouraging greater attention to AIDR system design, deployment and governance.

Few mechanisms exist to discipline or hold human neutrals accountable for errors or biases in judgment. While practitioners can be held liable for racially motivated behavior, a human neutral will rarely admit to racial bias, and would more likely justify an award in a reasoned decision based on permissible criteria. Even a statistically significant pattern of conscious or unconscious racial bias in a neutral’s awards is unlikely to invalidate any particular award. In contrast, AI systems can be evaluated for statistical error or bias and reprogrammed or decommissioned if revealed to be producing inaccurate or invalid outputs. Impacted users can also receive explainability statements, which provide information related to system functioning, data use, fairness, safety and performance, helping mitigate concerns about system opacity.
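
As a rough illustration of what such an evaluation can look like, the sketch below compares the rate at which a system’s outputs favor one group of disputants over another. It is a hedged, minimal example: the outcome data, group labels and single “favorable-outcome rate” metric are hypothetical assumptions, and real audits rely on richer statistical tests.

    # Illustrative sketch only: hypothetical outcome data and a deliberately simple fairness check.
    from collections import defaultdict

    # Each record: (disputant group, 1 if the system's output favored the claimant, else 0).
    outcomes = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]

    totals, favorable = defaultdict(int), defaultdict(int)
    for group, favored in outcomes:
        totals[group] += 1
        favorable[group] += favored

    rates = {g: favorable[g] / totals[g] for g in totals}
    disparity = max(rates.values()) - min(rates.values())
    print(rates, disparity)  # a large gap would prompt an audit, retraining or decommissioning

No comparable audit of a human neutral’s internal reasoning is possible, which is the asymmetry described above.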

If emerging rules hold AI systems to higher standards than human neutrals, such as enhanced transparency and explainability, then these rules may help address some of the long-standing needs regarding ADR governance.

Ryan Abbott is a neutral at JAMS in New York, Los Angeles and London. He has unique domestic and international expertise in the fields of life sciences, intellectual property, technology and health care. He is the author of the book The Reasonable Robot: Artificial Intelligence and the Law, which was published by Cambridge University Press in 2020. 

Brinson S. Elliott is a Client Team Leader at Cantellus Group, a boutique firm advising clients on the strategy, oversight and governance of AI and other frontier technologies. She works across the full range of Cantellus Group engagements and is particularly interested in the socio-technical, ethical and legal implications of emerging technologies and their governance. She is based in the San Francisco Bay Area. 

[i] Adapted from Abbott, Ryan, and Brinson S. Elliott. 2023. “Putting the Artificial Intelligence in Alternative Dispute Resolution – How AI Rules Will Become ADR Rules.” Amicus Curiae. The University of London School of Advanced Study. https://journals.sas.ac.uk/amicus/article/view/5627

Disclaimer:  The content is intended for general informational purposes only and should not be construed as legal advice.  If you require legal or professional advice, please contact an attorney.