Artificial Intelligence (AI) and Mediation: Technology-Based Versus Human-Facilitated Dispute Resolution
The use of AI in mediation has the potential to revolutionize the way disputes are resolved. AI-based mediators, also known as “virtual mediators” or “digital mediators,” use advanced algorithms and machine learning techniques to assist parties in reaching a resolution of their dispute.
AI mediation platforms take a variety of approaches. For example, one may use a rule-based system, in which a set of predefined rules (laws and regulations) guides the AI in its decision-making process. Another approach is a machine learning-based system, in which the AI is trained on a dataset of past mediation cases and uses this training to inform its decision-making process.
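The contrast between the two approaches can be illustrated with a minimal, purely hypothetical sketch. The rule and the past-case data below are invented for illustration and are not drawn from any real mediation platform; a real machine learning system would use a trained model rather than a simple historical average.

```python
def rule_based_suggestion(claim_amount, statute_cap):
    """Rule-based approach: apply a predefined rule.

    Hypothetical rule: the recoverable amount cannot exceed a
    statutory damages cap.
    """
    return min(claim_amount, statute_cap)


def learned_suggestion(claim_amount, past_cases):
    """Machine-learning-style approach: estimate from past outcomes.

    past_cases is a list of (claim_amount, settlement) pairs. As a
    stand-in for a trained model, this sketch applies the average
    historical settlement-to-claim ratio to the new claim.
    """
    ratios = [settlement / claim for claim, settlement in past_cases]
    avg_ratio = sum(ratios) / len(ratios)
    return claim_amount * avg_ratio


# Hypothetical inputs for illustration only.
print(rule_based_suggestion(150_000, statute_cap=100_000))
print(learned_suggestion(150_000, [(100_000, 60_000), (200_000, 110_000)]))
```

The rule-based result is fully explainable (the cap was applied); the learned result depends entirely on which past cases were chosen as training data, a point that becomes important in the discussion of bias below.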
In its current iteration, AI-assisted mediation should not be seen as a replacement for human mediators, but rather as a complementary tool that can assist in the mediation process. It does not make human mediators redundant. Rather, AI-assisted mediation should be viewed as a tool that can help the mediator provide the parties with objective analysis and recommendations for potential settlement options. However, as with any new technology, the use of AI in mediation raises a number of practical, ethical, and legal implications that must be considered.
In the most basic terms, the mechanics of AI-based mediation are not terribly different from those of traditional mediation. The parties provide information to a human neutral who uses his or her knowledge, experience, and skills (i.e., “programming”) to assist the parties in identifying a fair resolution of their dispute. AI platforms using natural language processing (NLP), along with machine learning algorithms to automate the dispute resolution process, follow a similar pattern:
- The parties involved in the dispute provide information about the case, including their arguments, evidence, and relevant laws or regulations. This information can be input into the AI platform in various formats, such as text, audio, or video. The AI platform can also help overcome language barriers.
- The AI platform uses natural language processing (NLP) and machine learning algorithms to analyze the information provided by the parties and identify key issues, strengths, and weaknesses in their arguments. It can also compare the case with similar past cases and identify relevant laws, regulations, and precedents.
- Based on its analysis, the AI platform generates a resolution suggestion that is fair, based upon the parties’ input, and consistent with relevant laws and regulations. The suggestion can take the form of a summary of key findings, an evaluation of the parties’ positions, or a proposed settlement. It could also be asked to generate a list of potential solutions or options for resolving a conflict. This could be especially helpful if the parties involved in the conflict are struggling to come up with ideas on their own.
- Alternatively, or in addition, the AI platform may be used to facilitate communication between parties in a conflict by providing a neutral third party to listen to both sides and generate responses that encourage productive dialogue.
- A human mediator, drawing on emotional intelligence and an ability to grasp the nuances of the case, can then adjust the suggested resolutions, incorporating any subjective considerations motivating one or both parties.
Once the parties agree on a resolution, the outcome and relevant information from the case can also be used by the AI platform to continually improve its algorithms and decision-making process.
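The workflow above can be sketched in a highly simplified form. The example below uses keyword matching as a stand-in for real NLP and machine learning; the issue labels, keywords, and submissions are hypothetical illustrations, not features of any actual platform.

```python
# Hypothetical issue categories and trigger words (illustrative only).
ISSUE_KEYWORDS = {
    "breach of contract": ["contract", "breach", "agreement"],
    "negligence": ["negligence", "duty of care", "injury"],
    "payment dispute": ["invoice", "payment", "owed", "owes"],
}


def analyze_submission(text):
    """Analysis step: flag likely issues in a party's written submission."""
    lowered = text.lower()
    return [issue for issue, words in ISSUE_KEYWORDS.items()
            if any(word in lowered for word in words)]


def suggest_resolution(issues_a, issues_b):
    """Suggestion step: draft a neutral summary of overlapping issues
    for the human mediator to refine in the final step."""
    shared = sorted(set(issues_a) & set(issues_b))
    if not shared:
        return "No overlapping issues identified; refer to human mediator."
    return "Parties agree the dispute involves: " + ", ".join(shared)


a = analyze_submission("Defendant breached the contract and still owes payment.")
b = analyze_submission("The agreement was not breached; the invoice is disputed.")
print(suggest_resolution(a, b))
```

Note that even this toy version ends by handing control back to a human: the output is a draft for the mediator, not a binding determination, mirroring the complementary role described above.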
AI-based mediators can process large amounts of data quickly, and this efficiency is a key advantage of AI-based mediation. AI algorithms can analyze large volumes of information and provide valuable insights into the strengths and weaknesses of each party’s position, which can help to inform the negotiation process. In addition, AI-powered chatbots can help facilitate communication between parties in a dispute and can assist in the negotiation by providing information and suggestions for potential settlement options.
The natural by-product of efficiency is the potential for a corresponding cost savings. AI-based mediation can lead to faster resolution times and lower costs for the parties involved. Additionally, AI-based mediators can be used to automate certain aspects of the mediation process, such as document management and scheduling, which can further reduce costs.
Another benefit of AI-assisted mediation is that it can provide a more objective and impartial analysis of the dispute and the parties’ positions. Because AI systems are not influenced by emotions or personal biases, they are less likely to make decisions based on subjective factors. AI-based mediators can be programmed to account for relevant legal principles and regulations, to analyze large amounts of data, including historical case data, and to identify patterns and trends, which can further improve the accuracy and fairness of their recommendations.
The use of AI in direct communications between the parties may play a role in de-escalating conflict, or at least in avoiding language that will escalate tensions. The absence of tone and body language in a written message can unintentionally lead to a miscommunication of intent. Imagine an algorithm that can detect language and syntax signaling hostility and a resistance to engaging in a compromise resolution and offer alternative messaging before the message reaches the other side. The drafter can then elect to keep the original message or revise it if it contains language that might lead the reader to the wrong understanding.
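A toy version of this idea might look like the sketch below. The hostile phrases and softer alternatives are invented for illustration; a real system would use a trained sentiment or tone model rather than a fixed phrase list.

```python
# Hypothetical phrase list pairing hostile wording with softer
# alternatives (illustrative assumptions, not a real sentiment model).
HOSTILE_ALTERNATIVES = {
    "that is ridiculous": "I see that point differently",
    "you never": "in our experience, you have rarely",
    "this is your fault": "we believe responsibility lies with your side",
}


def review_draft(draft):
    """Flag hostile phrasing in a draft message and pair each hit with
    a softer alternative. The drafter decides whether to keep the
    original wording or adopt a suggestion before sending."""
    lowered = draft.lower()
    return [(phrase, alternative)
            for phrase, alternative in HOSTILE_ALTERNATIVES.items()
            if phrase in lowered]


draft = "That is ridiculous. You never respond to our proposals on time."
for phrase, alternative in review_draft(draft):
    print(f"flagged: {phrase!r} -> consider: {alternative!r}")
```

Crucially, the tool only suggests; the drafter retains full control over the final message, consistent with the party autonomy concerns discussed later in this article.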
The accuracy and “fairness” of the AI platform is only as good as the data it is fed. An AI program would be limited to the information that has been provided by both the programmer and the party. As is true in pretty much every mediation, parties tend to hold certain cards close to the vest to bolster the strengths of their cases while minimizing aspects that challenge their perceptions of the probability of a best-case scenario outcome. Also, it is all but impossible to account for the potential biases of the programmer in selecting the data to be fed to the platform. It would be interesting to consider whether certain platforms will become known as plaintiff- versus defendant-friendly.
Moreover, while AI-based mediators can be programmed to identify patterns and make predictions, they lack the emotional intelligence of human mediators. Human mediators can read and interpret nonverbal cues and nuances, which can be important in understanding the parties’ perspectives and building trust. Human mediators bring empathy and an ability to respond to emotions that cannot be replicated by technology. Often, the successful resolution of a dispute might hinge on one or both parties feeling that they have been “heard” and understood before they will trust an outcome that they may otherwise perceive as failing to account for feelings associated with the underlying conflict.
Likewise, human mediators offer a level of flexibility and adaptability that is not possible with technology. Human mediators can refine their approach in real time to meet the specific needs of each case, which can result in a more tailored and effective mediation process. A human mediator will have the capacity to craft creative solutions to a dispute that account for the value of parties’ subjective motivations. We have all participated in mediations in which a non-monetary factor plays into the successful resolution of a dispute. Creative solutions can transcend the absolute.
The use of AI-based mediators may raise privacy and data protection concerns. For example, if an AI-based mediator collects personal data from the parties in a mediation, how is that data protected and who has access to it? To address these concerns, developers of AI-based mediators should be required to comply with relevant data protection laws and regulations and to provide clear and transparent information about how personal data is collected, used, and protected.
Another legal implication of using AI-based mediators is the admissibility of evidence that is generated by the system. For example, if an AI-based mediator generates a document or record that is used in a mediation, there may be some ambiguity as to whether that document or record could (or should) be admissible in court.
The use of AI-based mediators may also raise questions about the professional responsibility of lawyers and other legal professionals who use the system. For example, does a lawyer have a duty to ensure that the system is functioning correctly and that its advice is accurate?
Likewise, a key legal implication of using AI-based mediators is the question of liability and/or accountability in the event of a mistake. For example, if an AI-based mediator incorrectly advises a party in a mediation, who is liable for the mistake? Is it the developer of the AI-based mediator, the party that implemented the system, or the party that relied on the advice? These questions have yet to be fully answered and may vary depending on the jurisdiction. However, one potential solution is to require developers of AI-based mediators to provide indemnification and insurance to cover any potential liabilities.
The use of AI-based mediators also raises a number of ethical considerations that have yet to be fully resolved. As the use of AI-based mediators becomes more prevalent, it will be important for legal professionals to stay informed about these ethical considerations and to consider them when deciding whether to use an AI-based or human mediator. It is also important that the development and implementation of AI-based mediators consider these ethical considerations and include measures to mitigate any potential negative impacts on the parties involved.
Bias can be introduced into the decision-making process through the data that is used to train the system or through the design of the algorithm. This raises questions about whether the AI-based mediator is treating the parties fairly and whether the decision it reaches is impartial.
Another ethical consideration of using AI-based mediators is the lack of transparency. AI-based systems often rely on complex algorithms that are not easily understood by the parties or the legal professionals involved in the mediation. This raises questions about the transparency of the decision-making process and whether the parties have a clear understanding of how the AI-based mediator arrived at its decision.
In traditional mediation, the parties have the autonomy to make their own decisions and to reach a resolution that is acceptable to them. However, when using an AI-based mediator, the autonomy of the parties may be limited by the system’s decision-making process. This raises questions about the extent to which the parties are truly making their own decisions and whether the AI-based mediator is respecting the parties’ autonomy.
AI-based and human mediators each have their own strengths and weaknesses. AI-based mediators can be more efficient and cost-effective, but at the end of the day, there is no substitute for human mediators who have the ability to adapt their approach to the specific needs of each case and bring emotional intelligence to the process. In addition, the use of AI-based mediators raises a number of legal implications that have yet to be fully resolved. These include questions about liability, admissibility of evidence, privacy and data protection, professional responsibility, and ethical implications.
ABOUT AUDREY K. BERLAND
Audrey K. Berland is a mediator who has over 25 years of litigation experience. She has handled a wide variety of commercial and tort litigation matters for both defendants and plaintiffs through all phases of litigation, representing a broad range of clients in complex and multiparty matters, from Fortune 500 companies to small businesses and individuals. Consequently, the kinds of litigation Audrey has handled are similarly varied: claims involving product and pharmaceutical liability, environmental insurance coverage, construction defects, medical malpractice, and employment. She is able to help parties move beyond reactionary positions toward informed evaluations of their resolution options. She also serves as a Special Master.