A Stone in Still Water: The Growing Debate over AI in Dispute Resolution

Recently, an email thread among legal academics landed in my inbox with the force of a well-thrown stone in still water. The subject: artificial intelligence and its role—present and future—in alternative dispute resolution (ADR). The exchanges were thoughtful, sometimes pointed, and remarkably revealing. They made one thing clear: our field is changing fast, and there’s no consensus on what that change means.

The conversation began with comparisons to familiar technologies—word processing, email, the internet, Zoom. The message was that AI is just the next step forward, and resisting it risks obsolescence. But that sense of inevitability was quickly met with skepticism. Some warned that AI’s most enthusiastic adopters may be overlooking fundamental risks: errors, bias, erosion of human judgment, breaches of confidentiality, and the growing sense that we may be outsourcing trust itself. Still others proposed creative ways to integrate and manage AI responsibly.

What followed was an unspooling of shared concerns, diverging views, and previews of numerous scholarly articles and a forthcoming book. Some of this material is publicly available at the blog Indisputably.org. As these materials are formally published, I’ll cover them in more depth.

Understanding the Technology: Why Generative AI Has Become the Flashpoint

To understand the friction around AI in dispute resolution, it’s helpful to draw some distinctions.

Expert systems—rule-based programs—have been used for some time in legal and ADR settings. They follow pre-programmed logic and are transparent but limited to what they’re told.
Machine learning systems are more powerful. They detect patterns in vast datasets and make probabilistic predictions—useful in outcome forecasting and document analysis, but often opaque in how they reach conclusions.

Generative AI—tools like ChatGPT and Claude—sits atop this pyramid. These models are trained on enormous volumes of text and can generate plausible, coherent output: mediation summaries, legal briefs, simulated arguments, proposed resolutions. But they operate on probability, not understanding. They don’t reason in any human sense. They predict the next most likely word or sentence based on patterns, not on comprehension, empathy, or ethics.

And that’s where the current debate sharpens. These tools sound human. They speak our language, sometimes better than we do. And in a field like ADR—where tone, timing, and trust are everything—that’s not a small thing.

Benefits: Efficiency, Insight, Access, and Preparation

AI streamlines repetitive and time-consuming tasks—such as legal research, contract analysis, deposition and document summaries, drafting, translation, and checklist creation—saving professionals valuable time and reducing client costs. In doing so, AI can organize and extract key facts, compare party submissions, and generate case timelines. It allows faster orientation and more focused preparation—especially in high-volume or document-heavy cases. With the help of natural language processing, it can flag relevant issues and identify patterns buried in complex records.

Machine learning and legal analytics platforms can assess trends in judge rulings, case outcomes, and party behavior. These tools give attorneys a clearer view of likely outcomes, helping guide clients, shape negotiation strategies, and identify when ADR may be a more cost-effective path.

AI-powered ODR systems can help triage disputes, offer guidance to unrepresented litigants, and provide multilingual support. This expands access to justice by reducing reliance on gatekeepers and lowering procedural barriers.

AI tools can simulate alternative settlement options, suggest negotiation trade-offs, and identify shared interests based on party input. This supports mediators in crafting solutions beyond what parties may initially consider.

AI can assist in designing role-playing scenarios, building checklists, and offering prompts for issue-spotting, helping neutrals and attorneys enhance their skills and stay current with evolving practice challenges.

Risks: Errors, Bias, Deskilling, and Dehumanization

Generative AI can fabricate content that looks credible but isn’t. In a legal or dispute resolution setting, those hallucinations can mislead or confuse parties and professionals alike.

These tools are trained on vast datasets—often scraped from a history that contains inequities. Without careful design and oversight, they can replicate or even amplify those inequities, nudging users toward outcomes rooted more in statistical averages than human fairness.

There’s growing concern that as we automate more of the work, we erode essential human capacities: judgment, persuasion, empathy. The risk isn’t just that AI can’t do what humans do—but that humans may stop doing it, relying instead on the machine.

Tools like ChatGPT were not built for legal confidentiality. Submitting sensitive case information to these systems—especially without knowing how or where data is stored—can create ethical and professional landmines.

Perhaps most concerning, AI can create the illusion of understanding without its substance. A tool that sounds empathetic but lacks any capacity for empathy risks misleading users and undermining the moral and relational core of dispute resolution. When parties feel like they’re being “processed” by an algorithm rather than heard by a human, trust breaks down.

In court-connected ADR and beyond, public legitimacy depends on the perception of fairness. If resolution appears driven by inscrutable algorithms rather than principled judgment, we risk diminishing confidence not just in ADR, but in the broader system it supports.

Ethics and Regulation: The Framework Is Still Emerging

Right now, guidance is limited. The ABA’s Formal Opinion 512 encourages lawyers to be aware of AI tools and to assess their use in the context of Model Rules 1.1 (competence), 1.6 (confidentiality), 1.4 (communications), and 1.5 (fees). 

I highly recommend reviewing Formal Opinion 512, if you haven’t already. 

Formal Opinion 512 is a start, but it remains relatively conceptual, is subject to acceptance by state bar associations, and governs only licensed attorneys.

Various ADR platform providers, such as the American Arbitration Association, are embracing AI in different and evolving ways, and in these circumstances the market will accept (or reject) various approaches.

Some academics are proposing more comprehensive approaches, such as creating classes of paraprofessionals or a split bar in which a specialized segment of the legal profession embraces generative AI.

Could the challenges of AI cause states to finally begin regulating mediators and arbitrators?

All of this leaves mediators, arbitrators, and advocates to navigate uncertain terrain, making decisions about tools that few of us fully understand.

Final Reflections

Reading through the emails and the linked papers, I kept coming back to a simple image: AI, quietly entering the mediation room. Not sitting at the table. Not making decisions. But present. Taking notes. Whispering suggestions.

For now, it’s a presence. But soon, we’ll have to decide what kind of voice it should have—how loud, how independent, how trusted. For all of AI’s promise, I keep thinking about the moments I’ve seen in mediation when progress hinged on something ineffable: a shared glance, a pause, an acknowledgment of pain or humanity. Those moments resolve cases. They build trust. They are—and remain—irreplaceably human.
