Artificial Intelligence in Mediation: Risks and Opportunities in Dispute Resolution

Dr. Giovanna Del Bene | Thinx

Technology is radically transforming many areas of our daily lives, and the legal sector is no exception. Among the most promising innovations in this field is the application of artificial intelligence (AI) and predictive justice. AI is not only improving the efficiency of the legal system but also opening new opportunities for more accessible, fairer, and faster justice. In this article, we explore how artificial intelligence is helping to optimize mediation and predictive justice, and we discuss the potential implications for the future of law.

1. Artificial Intelligence in the Legal Sector and Predictive Justice: A Future of Informed Decisions

AI in the legal context refers to the use of advanced algorithms to analyze data, automate processes, and support informed decisions. In the legal sector, AI can be used for a wide range of applications, including legal document processing, case management, and predicting the outcomes of legal disputes. One of the most significant capabilities of AI is its ability to analyze enormous volumes of legal data in very little time, improving the efficiency of lawyers, judges, and legal professionals in general.

Predictive justice is a branch of legal technology that uses AI and has developed over time along three main directions: crime prevention and preliminary investigations; a “predictive” function, referring to the ability to foresee the outcome of legal proceedings based on the analysis of past cases; and finally, a “substitute” function of the judge, completely delegating the decision to an algorithm.(1)

Predictive justice tools are widespread both in the USA and in several European Union countries. The idea behind predictive justice is that analyzing historical data, such as previous rulings, legal opinions, and the behavior of the parties involved, can provide indications of the possible outcomes of a case. This approach not only helps lawyers and judges make more informed decisions but also allows the parties involved to realistically assess their chances of success in a lawsuit, improving transparency and reducing uncertainty.

One of the best-known applications in the legal field is the use of machine learning algorithms to analyze large amounts of data and predict the outcomes of disputes; a simplified sketch of this kind of outcome prediction follows below. Such systems can, for example, suggest legal strategies, predict opponents’ moves, and even offer compromise scenarios that could resolve a case without a lengthy judicial process. However, there are also critical issues to consider: algorithms can perpetuate biases present in historical data, leading to unfair decisions. Furthermore, it is essential that AI decision-making processes be understandable, avoiding the algorithmic opacity that could undermine trust in the judicial system. Finally, it is necessary to determine who is responsible in case of AI errors and to regulate this eventuality clearly.
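
To make the idea concrete, here is a minimal, purely illustrative sketch of how such an outcome predictor might be assembled with standard machine learning tools. Everything in it is an assumption for illustration: the file historical_cases.csv, the features (claim_amount, case_duration_days, prior_rulings_for_plaintiff), and the binary outcome label are hypothetical, and real predictive-justice systems rely on far richer data, rigorous validation, and the bias audits discussed below.

```python
# Minimal sketch of dispute-outcome prediction (illustrative only).
# Assumes a hypothetical CSV of past cases; not any real product's method.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical columns: claim_amount, case_duration_days,
# prior_rulings_for_plaintiff, outcome (1 = plaintiff prevails).
cases = pd.read_csv("historical_cases.csv")

X = cases[["claim_amount", "case_duration_days", "prior_rulings_for_plaintiff"]]
y = cases["outcome"]

# Hold out part of the data to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Estimated probability of success for a new dispute -- the kind of figure
# parties could use when weighing settlement against litigation.
new_case = pd.DataFrame(
    [[50_000, 400, 3]],
    columns=["claim_amount", "case_duration_days", "prior_rulings_for_plaintiff"],
)
print("Estimated probability of plaintiff success:",
      model.predict_proba(new_case)[0][1])
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A logistic model is chosen here only because its coefficients are easy to inspect, which matters for the transparency concerns discussed throughout this article.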

Machine learning is most widely used in Common Law countries, where legal systems rely on judicial precedents. In these systems, algorithms formulate decisions by analyzing past rulings. In contrast, Civil Law systems, which are based on written statutes and the principle of legality, may find such algorithms less appropriate. 

In the United States, predictive justice has become a growing phenomenon, especially thanks to the use of advanced algorithms to analyze vast amounts of legal data. Technology companies are developing tools that can predict the outcome of a lawsuit with a certain probability, analyzing legal precedents, legal arguments, and other relevant information. 

One of the best-known tools is PredPol, which is used to predict crime in certain geographic areas, although its application is oriented towards crime prevention rather than predicting trial outcomes. Tools such as Lex Machina and Casetext, by contrast, focus directly on analyzing past legal decisions to anticipate the outcomes of similar cases.

The use of predictive justice in the United States has raised discussions regarding algorithmic discrimination and the transparency of models. Critics argue that if algorithms are not properly designed, or if the historical data used to train them contain racial or socio-economic biases, they risk perpetuating inequalities and discrimination.

A striking example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used in courts to assess the likelihood of recidivism among defendants, which has been accused of bias against minorities. The algorithm uses responses to a 137-question survey (covering age, work, social life, education level, drug use, personal opinions, criminal record, and so on) to predict the risk of reoffending. As a calculation criterion it adopts purely statistical and socio-economic factors that are not specific to the individual case, thus producing discriminatory treatment of certain ethnic groups compared to others. In this regard, the case of State of Wisconsin v. Eric L. Loomis, decided by the Wisconsin Supreme Court (Case no. 2015AP157-CR, argued 5 April 2016, decided 13 July 2016), is well known. The trial court had sentenced Loomis to six years in prison relying on the predictive results produced by the COMPAS software, used already at the adjudication stage.(2) Loomis’ defense contested the decision, arguing that the use of COMPAS in sentencing violated the right to a fair trial on three grounds: the right to an individualized sentence; the right to a sentence based on clear and verifiable information, which was not available here because the workings of the software were protected as a trade secret; and the improper use of gender data in determining the sentence.

Following the Loomis case, the American non-profit organization ProPublica published a study of how the COMPAS software performed. The report highlighted a lack of transparency in its operation and found that the data processed by the program led to discriminatory results: among defendants who did not go on to reoffend, Black defendants were far more likely than White defendants to have been labeled high risk. The report brought public attention to the critical issues of a justice system based on AI mechanisms. In response to these concerns, some U.S. states have begun regulating the use of predictive justice tools, imposing greater transparency.
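
The disparity ProPublica reported can be made concrete with a simple error-rate comparison. The sketch below uses invented toy data (it is not the actual COMPAS dataset) and computes, for each group, the false positive rate: the share of people labeled high-risk who did not in fact reoffend.

```python
# Illustrative fairness check in the spirit of ProPublica's analysis:
# compare false positive rates (flagged high-risk but did NOT reoffend)
# across two groups. The data below are invented for demonstration.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   1,   0,   0,   1,   0,   0],  # algorithm's label
    "reoffended": [0,   1,   0,   0,   0,   1,   1,   0],  # observed outcome
})

for group, sub in df.groupby("group"):
    # Restrict to people who did not reoffend; any high-risk label here
    # is a false positive.
    non_reoffenders = sub[sub["reoffended"] == 0]
    fpr = non_reoffenders["high_risk"].mean()
    print(f"Group {group}: false positive rate = {fpr:.2f}")

# Prints 0.67 for group A and 0.00 for group B: a score can look accurate
# overall while its errors fall disproportionately on one group.
```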

In Europe, predictive justice has developed in a very different legal and regulatory context from that of the United States. The European Union has a strong commitment to the protection of personal data through the General Data Protection Regulation (GDPR), which establishes strict limits on the use of sensitive data. This regulation makes the use of algorithms for predictive justice particularly complex, especially when personal or sensitive data are processed without the explicit consent of the individuals involved.

An interesting example of a predictive justice application in Europe is the European Commission’s e-Justice project, which aims to digitize and modernize legal systems using advanced technologies. However, the European approach to predictive justice is generally more cautious, prioritizing a balance between technological innovation and respect for fundamental individual rights, such as privacy and the right to a fair trial.

Moreover, the European Union is addressing issues of transparency and accountability in the use of algorithms by adopting principles that promote the intelligibility of algorithms and the need to ensure that automated decisions are comprehensible to end users.

Regarding the liability profiles arising from the use of artificial intelligence systems, the AI Act outlines a distribution of responsibility among the various parties involved, including the program developer, the user, the producer, and the seller. That said, below are some examples of AI applications in predictive justice in the European Union (3):

  • Case Law Analysis: AI can examine vast archives of rulings to identify patterns and trends, helping to predict the outcome of similar cases. For example, in France, the Predictice system, created in 2016, estimates the amount of compensable damage in commercial or intellectual property cases.

In 2020, constitutional bill no. 2585 on the “Charte de l’intelligence artificielle et des algorithmes”(4) was presented to the French Parliament. It applies “to any system that consists of a physical entity (e.g., a robot) or a virtual entity (e.g., an algorithm) and that uses artificial intelligence. The notion of artificial intelligence is understood here as an algorithm that evolves in its structure. Such a system is not endowed with legal personality and therefore cannot hold subjective rights. The obligations deriving from legal personality fall upon the natural or legal person who hosts or distributes the aforesaid system, who effectively becomes its legal representative” (Art. 1).

In Art. 3, the principle is reaffirmed that “Every system is initially designed to ensure the fully effective application of the articles of the Universal Declaration of Human Rights of December 10, 1948,” thus reiterating that the introduction and use of automated systems must be done in harmony with and in respect of fundamental human rights. 

  • Decision Support: Algorithms can provide suggestions based on judicial precedents, assisting judges in making more informed and uniform decisions. This is especially useful in legal contexts where similar cases recur frequently, where the law itself imposes procedural simplification (making an in-depth analysis of the facts of the case unnecessary), and in minor cases or those of limited social relevance.

In Italy, the Court of Appeal of Brescia has implemented a decision-support system built on a database of labor law and commercial law rulings to promote the sharing of jurisprudential orientations between the first- and second-instance courts. Also noteworthy is the “Predictive Jurisprudence” project developed by the Sant’Anna School of Advanced Studies in Pisa and the Court of Genoa, which analyzes decisions and procedural documents according to criteria and methodologies developed by the Observatory on Personal Injuries, applicable to litigation areas beyond non-pecuniary damages.

There is also the case of the Key Crime software, designed by Mario Venturi, a former Chief Assistant of the Milan Police Headquarters. Unlike the American experience, where software was based on so-called hotspot analysis to identify city areas at risk of theft and crime, Key Crime focuses on the perpetrator of the crime. Because profiling is carried out on the basis of multiple data points and pieces of information, it is possible to prevent potential discrimination by the algorithm.

Regarding the possible use of AI in judicial decision-making, Press Release no. 78 of the Council of Ministers of 23 April 2024 establishes that in the administration of justice the use of AI is permitted exclusively for instrumental and support purposes, such as organizing and simplifying judicial work and carrying out jurisprudential and doctrinal research, including research aimed at identifying interpretative orientations. The decision on the interpretation of the law, the evaluation of facts and evidence, and the adoption of any measure, including the ruling, always remains reserved for the judge.(5)

This aligns with the principle of complementarity between humans and machines, already affirmed in ruling no. 2270 of April 8, 2019, of the Council of State, which stated that the use by the Public Administration of an algorithm whose functioning is not knowable violates the principles governing administrative activity, particularly the impartiality, publicity, and transparency of the related administrative IT act. The decision is important because it also highlighted the numerous advantages of automation for serial and standardized procedures, which involve processing large volumes of applications, the acquisition of certain and objectively verifiable data, and the absence of any discretionary assessment.

The use of such automated procedures complies with the principles of efficiency and cost-effectiveness of administrative action, which in turn derive from the constitutional principle of good administration (Art. 97 of the Italian Constitution). Indeed, it brings numerous advantages, such as significantly reducing procedural time for purely repetitive operations involving no discretion, eliminating interference due to the negligence (or, worse, the malice) of the human official, and consequently ensuring greater impartiality in automated decisions.

Nevertheless, the Court held that the technical rule applied by the algorithm is still an administrative rule with full legal value and, as such, subject to the principles governing administrative activity. The Public Administration must therefore carry out the balancing of interests in advance, leaving the algorithm no room for discretion. The algorithm is thus characterized as an “administrative IT act.”

Consequently, on the one hand, the mechanism leading to the decision must be fully knowable (regarding its authors, the procedure used in processing and decision-making, the data used, and the priorities assigned). On the other hand, the correctness of the IT process is subject to judicial review by the administrative judge, who must have full knowledge of it to assess the legality of the automated decision.(6) 

  • Litigation Prevention and Management: Knowing in advance the probabilities of success of a case can deter parties from initiating legal actions with little chance of winning, thereby reducing the courts’ workload. The Court of Appeal of Venice, in collaboration with Ca’ Foscari University, has launched a project to make decisions on dismissals for just cause more predictable, discouraging unfounded litigation.

In the tax field, a project called Prodigit has been funded, aimed at creating algorithmic systems capable of analyzing laws, rulings, and doctrinal contributions to predict, with a sufficient degree of probability, the likely judicial orientation on a specific legal issue. 

In the Netherlands, following the 2002 reform of the judicial system, the Rechtwijzer platform was developed by the University of Twente and HiiL (the Hague Institute for the Internationalisation of Law), an international advisory institute based in The Hague. The online platform facilitates communication between the user, the mediator, and the legal assistant, providing services such as mediation and monitoring of the enforcement phase. In 2015, disputes relating to property rights, condominiums, and personal services were added.

2. AI-Assisted Mediation: A New Era for Dispute Resolution

The introduction of AI in traditionally human-driven mediation is one of the most intriguing and complex innovations in recent years. While AI promises to make the mediation process faster, more efficient, and more accessible, it also raises ethical, legal, and practical concerns that deserve attention. 

For example, AI can be used to analyze the positions of the parties involved, suggesting solutions that consider previous legal decisions and current regulations. Furthermore, technology can facilitate communication between parties and propose automatic resolution options, reducing the need for physical mediation and lowering associated costs. This would also allow mediation to spread as an alternative dispute resolution method, reaching distant locations and individuals. 

If well-designed, AI is not subject to human biases that may influence a traditional mediator. Algorithms can consider a broad range of factors without being swayed by personal opinions or emotions. In theory, this would ensure greater impartiality in the mediation process, reducing the risk of discrimination or unbalanced solutions. 

AI can analyze historical dispute data, examine behavioral patterns of the parties involved, and suggest tailored solutions. It can also provide insights into the likelihood of successful mediation, helping involved parties and their advisors assess realistic options, thereby encouraging settlement agreements. This approach, based on large datasets and machine learning techniques, allows for the creation of solutions that are more suited to the specific needs of the parties, potentially enhancing satisfaction. 

Additionally, AI can assist in training virtual mediators—software designed to help conflicting parties manage their interactions. These tools can simulate mediation scenarios and suggest solutions based on predictive models derived from past legal cases. 

Confidentiality and privacy are fundamental aspects of mediation. AI platforms can use encrypted protocols to protect the confidentiality of information throughout the process. However, it is crucial that such information be handled securely, in compliance with privacy regulations (such as the GDPR), and that AI systems be designed to prevent data breaches and cyber-attacks.
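
As a small illustration of the encryption point, the following sketch protects a mediation record at rest with symmetric encryption, using the Fernet primitive from the widely used cryptography package. The record text and key handling are simplified assumptions; a real platform would add key management (a vault or KMS), access control, and audit logging.

```python
# Minimal sketch: keeping a mediation record confidential at rest.
# Requires `pip install cryptography`; key handling is deliberately simplified.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, kept in a secure key vault
cipher = Fernet(key)

session_notes = "Party A is open to a staged payment plan.".encode("utf-8")

token = cipher.encrypt(session_notes)   # ciphertext, safe to store or transmit
restored = cipher.decrypt(token)        # readable only with the key

assert restored == session_notes
print("Encrypted record:", token[:32], "...")
```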

However, the use of AI in mediation is not without risks and critical issues.
One of the main risks related to the adoption of AI in mediation is the loss of human interaction. Traditional mediation is largely based on the mediator’s ability to understand the emotions of the parties involved, establish an empathic connection, and adapt their techniques to dynamic and complex situations. While AI can support mediators, it may struggle to replicate the sensitivity and adaptability of a human professional, particularly in scenarios requiring psychological insight.

The role of the mediator is not to determine who is right or wrong and apply the law, but to help the parties find a mutually satisfactory agreement that goes beyond legal constraints. Only a human mediator can recognize the deeper, underlying needs behind each party’s position. Often, the most effective solutions come from the mediator’s creativity and flexibility, qualities that a rigid AI system may fail to accommodate, particularly when it comes to understanding and responding to human emotions.

Despite its potential for impartiality, AI is not free from errors. Algorithms are created and trained on historical data, which may reflect biases linked to past decisions or existing inequalities in society. If the data used to train an automated mediation system were not representative or were contaminated by biases, the proposed solutions might not be fair or could even discriminate against certain categories of people. Relying exclusively on AI could therefore risk perpetuating injustices instead of resolving them.

Many people might be reluctant to use AI for mediation, fearing that the technology is not reliable enough or that the entire process may lack transparency. In particular, those who are not tech-savvy might feel disoriented or distrustful of the possibility of resolving a dispute through an automated system, fearing that decisions are too distant from their concrete needs or the emotional reality of the conflict. 

If AI becomes too widespread in the field of mediation, there is a risk that the role of traditional mediators would be diminished or even replaced. While AI can certainly assist mediators and enhance their tools, it is crucial to remember that mediation is also a human process, requiring active listening, intuition, and an understanding of emotions and interpersonal dynamics. The risk is that, in an attempt at automation, the essential art of mediation could be lost. 

An interesting case in this context is a recent decision by the UK Court of Appeal (7) on the patentability of inventions based on artificial intelligence, in this instance a neural-network system developed by the company Emotional Perception AI. The ruling raises important questions about the extent to which inventions implemented through AI systems can be patented and what protection their human developers can claim.

The Court overturned an earlier decision of the High Court, ruling that an artificial neural network (ANN) is a computer program and that Emotional Perception’s invention is not patentable, since it does not involve a technical contribution. The decision could be appealed further to the Supreme Court. For now, the ruling represents a significant step in the legal and ethical debate on intellectual property protection for AI-related inventions, particularly in a field as sensitive as emotional perception, where the impact on people can be substantial. There is also concern that patenting advanced AI technologies could allow companies to monopolize crucial technologies, limiting access and use by other stakeholders.

3. Conclusions: Striking a Balance

AI, mediation, and predictive justice represent a powerful combination of tools that could profoundly transform the legal world. While the benefits are clear, it is equally important to address ethical and legal concerns seriously, ensuring that technology is used to promote fair and impartial justice. With continuous technological evolution, we may be entering a new era of justice where efficiency and transparency can finally go hand in hand with the protection of fundamental rights. 

However, it is essential to continue discussing the ethical, legal, and social implications of these choices to ensure that technological progress happens in a responsible and fair manner. The issue of intellectual property in the AI era is bound to evolve, and decisions like the one in the UK will be crucial in defining the future of research and innovation. 

The introduction of artificial intelligence in mediation presents both great opportunities and significant risks. On the one hand, the possibility of reducing time, cutting costs, and democratizing access to dispute resolution is a positive step toward faster and fairer justice. On the other hand, it is crucial to monitor and regulate AI carefully to prevent potential discrimination, privacy violations, and the loss of the human element. 

The future of mediation will likely witness a synergy between AI and human mediators, where technology complements professional expertise to create a more efficient and comprehensive process while never sacrificing the value of human interaction and empathy.  

It is important, in general, to beware of a merely Cartesian application of written law that neglects human perception, as Sophocles warns in the tragedy of Antigone. Ultimately, conflict resolution, especially in mediation, is not just a matter of data but of people and their perceptions; it requires empathy, understanding, and the ability to see beyond mere legal formalities.

From Sophocles’ tragedy Antigone: “Unwritten, unshakable laws of the gods, which are not alive from today or yesterday, but from forever, nor does anyone know when they were born.”


(1) Arianna Maceratini, La giustizia predittiva: potenzialità e incognite, MediaLaws.

(2) State v. Loomis, Wisconsin Supreme Court, 2016, Justia.

(3) Giustizia predittiva: la dignità umana faro per l’AI nei processi, Agenda Digitale.

(4) Proposition de loi n° 2585, 15e législature, Assemblée nationale.

(5) Press Release of the Council of Ministers no. 78, 23 April 2024, Palazzo Chigi.

(6) Council of State, Section VI, 8 April 2019, no. 2270.

(7) Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Ltd, Find Case Law, The National Archives; Examining patent applications involving artificial neural networks, GOV.UK.

