Can AI Mediate? What Research Using AI in Therapy Tells Us.

Recent research out of the mental health space has sparked a fascinating question for the mediation community: 

If AI can deliver effective therapy, could it also mediate conflict? 

A recent article in MIT Technology Review reported on research that found that a generative AI model—carefully trained on cognitive behavioral therapy (CBT) literature—helped participants manage symptoms of depression, anxiety, and eating disorders nearly as effectively as human-delivered therapy, and in less time. The study reportedly provided the first randomized-trial clinical evidence that a large language model (LLM) can engage in emotionally supportive, structured conversation with positive therapeutic impact. 

ADR Notable has been closely watching developments in AI to see how this new technology might affect the dispute resolution field. Initially cautious because of reports of flawed performance, we have recently been encouraged by improvements evidenced by applications like Prof. John Lande’s RPS Negotiation and Mediation Coach (Lande RPS Coach) and by research in parallel fields. 

First, How Did the Researchers Improve the LLM’s Reliability? 

GPT models like ChatGPT are first trained on a massive dataset pulled from the internet—books, articles, websites, blogs, conversations, movies, and more. This gives them a broad grasp of language, facts, and context. But everything is in there – fact, fiction and falsehoods, prejudices and affection – all jumbled together. This makes it challenging for standard LLM systems to generate consistent, accurate responses. Techniques have been developed to address this problem. 

The researchers reported that early versions of a therapy bot built only on the basic training database might respond with simulated empathy that went too far, joining the patient in expressions of depression. Training the LLM on transcripts of actual therapy sessions instead produced a lot of ‘hmm’ or ‘go on’ responses. Or worse, a patient who said they wanted to lose weight might get encouragement from the bot, with no preliminary screening for a potential eating disorder. These responses were deemed unhelpful and potentially harmful. 

The authors note that many current online ‘therapy’ bots do not go much further than this. In mild cases of simple loneliness, for example, there is evidence bots can be helpful. But the researchers were clear that these, and even their carefully trained bot, are not substitutes for a human therapist in more complex cases.   

The CBT researchers went further, curating a supplemental training library of evidence-based materials from cognitive behavioral therapy and training the bot to respond using best practices from CBT research. LLM output can be made more relevant and accurate with a process called Retrieval Augmented Generation (RAG). RAG retrieves the most relevant passages from a curated, subject-matter library and adds them to the prompt sent to the underlying model, grounding the response in that vetted material and improving the relevance and reliability of the output. Prof. Lande’s RPS Coach is an example of this approach and can help trainers, mediators, and parties prepare for and participate in mediation in much the same way an experienced colleague might. 
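
For readers who want to see the mechanics, here is a minimal sketch of the RAG pattern in Python. It is illustrative only: the CBT excerpts, the keyword-overlap retriever, and the undefined model call are all assumptions for this example, not the researchers’ actual system.

# Minimal RAG sketch (illustrative only; the library entries and retriever are hypothetical).
# 1) Retrieve the most relevant passages from a curated, subject-matter library.
# 2) Prepend them to the user's message so the model grounds its answer in vetted material.

CURATED_LIBRARY = [
    "CBT technique: help the client identify the automatic thought behind a strong feeling.",
    "CBT technique: screen for disordered eating before supporting weight loss goals.",
    "CBT technique: reframe all-or-nothing statements into graded, testable beliefs.",
]

def retrieve(query: str, library: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by keyword overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        library,
        key=lambda passage: len(query_words & set(passage.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(user_message: str) -> str:
    """Augment the prompt with retrieved passages before it goes to the model."""
    context = "\n".join(retrieve(user_message, CURATED_LIBRARY))
    return (
        "Use only the guidance below when responding.\n"
        f"Guidance:\n{context}\n\n"
        f"Client: {user_message}\nTherapist:"
    )

# The augmented prompt would then be sent to whatever model API the system uses.
print(build_prompt("I want to lose weight fast"))

A production system would typically replace the keyword overlap with semantic (embedding-based) search, but the flow is the same: retrieve, augment the prompt, then generate.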

This domain-specific, augmented GPT was then tested in a randomized trial with 210 participants. 

The result? Patients diagnosed with depression saw the best results, followed by those with generalized anxiety and those at risk of eating disorders. In all, researchers found the bot achieved results similar to human-delivered psychotherapy, in less time and with impressive patient engagement. The carefully trained model could, in many cases, carry on a text-based therapeutic conversation that participants found helpful and engaging—and in some cases, even healing. 

While the results were surprisingly positive, the report came with some large caveats. For example, during the study a human team reviewed the dialogues, ready to intervene immediately if a patient expressed serious safety risks; that was not left to the bot. One of the researchers summed it up: “While these results are very promising, no generative AI agent is ready to operate fully autonomously in mental health where there is a very wide range of high-risk scenarios it might encounter.” 

What If GPT Were Trained to Mediate? 

It’s not hard to imagine applying the same approach to mediation. With the right data and training structure, a GPT model could learn to facilitate conflict resolution. Many are finding good results even from general-purpose LLMs with prompts that ask the model to (a brief sketch follows the list): 

  • Reframe positional statements into interest-based language 
  • Identify opportunities for clarification between disputing parties 
  • Prompt parties to generate options for compromise 
  • Identify and summarize points of agreement 
  • Help parties reality-test their assumptions 
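
To make the first item concrete, here is a minimal sketch of how such a prompt might be assembled in Python. The instruction wording and the example statement are assumptions for illustration; any LLM chat interface could receive the resulting prompt.

# Illustrative prompt for reframing a positional statement into interest-based language.
# The wording below is an example only; it is not a specific product's implementation.

REFRAME_INSTRUCTIONS = (
    "You are assisting a mediator. Rewrite the party's positional statement as a "
    "neutral, interest-based statement: name the underlying interest, drop blame, "
    "and keep the party's meaning intact. Return only the rewritten statement."
)

def build_reframe_prompt(positional_statement: str) -> str:
    """Combine the standing instructions with the party's own words."""
    return f"{REFRAME_INSTRUCTIONS}\n\nParty statement: \"{positional_statement}\""

# Example: "I will never agree to weekend handoffs at his house." might come back as
# "I need a handoff arrangement where I feel safe and respected."
print(build_reframe_prompt("I will never agree to weekend handoffs at his house."))

The same pattern works for the other tasks above: fix the standing instructions, insert the parties’ own words, and let the mediator review the output before it reaches anyone else.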

Such models have already been used successfully in text-based, low-complexity environments—like small claims, customer service disputes, or workplace conflicts—where the stakes are lower and the goal is a straightforward agreement. ODR.com is one exceptionally strong example. 

But Mediation Isn’t Just Structured Conversation 

Mediation is deeply human work. It involves: 

  • Managing multi-party dynamics in real time 
  • Responding appropriately to both expressed and unspoken emotions 
  • Reading nonverbal cues like body language, facial expression, and tone 
  • Responding to power imbalances 
  • Using intuition to shift strategy moment-to-moment 
  • Building trust and rapport between people who may not want to be in the same room 

Even the most advanced AI can’t yet “feel” the room or navigate the relational depth that a skilled mediator brings. So the question today isn’t whether AI can replace mediators—it’s how it can meaningfully support them. 

What AI Can Do for Mediators—Right Now 

The immediate future of AI in mediation serves not as a replacement, but as an augmentation—a smart, behind-the-scenes assistant that helps professionals do what they do best. 

We can already see two pathways for AI. One is the simple assistant role that automates practical administrative tasks. So-called “agentic” AI can already bring a degree of mimicry of human judgment to completing tasks—something more than robotic ‘if-then’ programming. The second pathway is more interesting. With the right training, an AI bot can be a co-mediator, challenging the practitioner to carefully consider questions about the process, parties, positions, and possible resolutions. This process still depends ultimately on the judgment of the mediator. But it can be enhanced by an AI bot that, with the right questions or prompts, plays the role of coach, collaborator, or co-mediator with remarkable insight into a specific case. And chatbots can even have an imperturbable, cheerful “personality.” 

All of this can save time, increase accessibility, and improve user experience—without losing the human heart of the process. 

Final Thought: AI Is a Tool, Not a Replacement 

The success of AI in therapeutic settings suggests we’re on the cusp of something big: machines that can participate in emotionally intelligent dialogue. For mediators, that’s both an opportunity and a challenge. 

AI can help scale access to justice and streamline low-conflict interactions. But it can’t replace the core elements of mediation: presence, empathy, nuance, and wisdom. 

At ADR Notable, we’re committed to building tools that support mediators—not supplant them. Our goal is to help you do your work more efficiently, ethically, and impactfully. 

Want to Learn More?

Stay ahead of the curve with insights into AI, mediation tech, and the future of dispute resolution.
👉 Subscribe to our newsletter
👉 Book a demo of our AI-supported mediation tools
👉 Contact us to collaborate
