Artificial Intelligence in Arbitration: Evidentiary Issues and Prospects

Katrina Limond | Global Arbitration Review

Introduction

The artificial intelligence (AI) genie is out of the bottle.

In November 2022, OpenAI released its AI chatbot, ChatGPT. As at June 2025, it had amassed over 122.6 million daily active users and was processing more than 1 billion queries daily.[1]

GPT, the large language model (LLM) developed by OpenAI,[2] has wowed (and terrified) with its apparent displays of human-competitive intelligence in a broad array of fields. GPT-4 scored in the top 10 per cent of a simulated bar exam, achieved a near-perfect Scholastic Aptitude Test science score and obtained similar results across a wide range of professional and college admission exams.[3] As has been widely publicised, Allen Overy Shearman Sterling LLP (A&O Shearman) was the first law firm to encourage its lawyers to use Harvey, a domain-specific AI for law firms, to automate and enhance various aspects of their work.[4]

Ask Harvey to prepare a memo on hot topics in international arbitration and it will prepare one within seconds (at a level of competence that will surprise many practitioners). Ask it to do so in the style of Donald Trump and you will be impressed (and amused) by the results.

This is a remarkable feat. As recently as October 2022, if you had received a coherent memo on a legal topic, this would have been proof of human involvement (if not quite human intelligence). That assumption is now, human prompting aside, obsolete.

Law-related use cases for AI have continued to proliferate and specialise. For instance, in 2023, A&O Shearman released (in partnership with Harvey) ContractMatrix, an AI-powered contract negotiation and drafting tool.[5] More recently, A&O Shearman and Harvey are launching a series of agentic, multi-step reasoning AI agents that can automate (with transparency and oversight) complex legal tasks, such as antitrust filings, loan reviews and fund formation.[6]

It is natural to wonder: how far will this go? What will AI models be able to do, and how will human beings fit in? Investment in generative AI continues apace and is projected to increase by 76.4 per cent from 2024 to 2025,[7] while research breakthroughs, particularly around the creation of autonomous agents, continue in the field.

Leading AI advocates have even expressed concerns about the speed of progress and how AI can be misused, from deepfakes to misinformation.[8] These developments seem certain to have profound implications for our society. It is naïve to assume that international arbitration could be immune.

Against that backdrop, this chapter considers some of the ways in which AI models (LLMs in particular) may transform the practice of international arbitration. Its focus is on evidence: how AI has started to change, and may in the imminent and conceivable future change, the way in which parties gather, analyse and present evidence. The chapter’s core hypothesis is that while AI will not replace lawyers, lawyers who use AI will replace those who do not.

But the road will not be without its speed bumps. AI comes with risks and has limitations. For instance, examples abound of ChatGPT (and competitor AIs) confidently asserting falsehoods, known as ‘hallucinations’. Users cannot blindly follow its outputs, as various sorry lawyers have discovered.[9] This chapter therefore also considers some of the potential dangers and ethical issues arising from the use of AI.

Finally, we must note (with humility) that AI’s development is fast-moving and unpredictable, as demonstrated by developments since this chapter’s first edition in October 2023. We do not purport to have a crystal ball, nor to have ‘answers’ to the ethical considerations its use will pose. We hope, however, that this chapter will be a thought-provoking addition to what will be an important conversation for the arbitration community in the years to come.

Use of AI for evidence – a bright future?

Given AI’s emerging capabilities in analysing and manipulating language – at speed and at scale – it is clear that AI has powerful potential applications for identifying, finding and analysing evidence.[10]

This section considers just some of those potential applications, starting with the pre-dispute or early claim development phase, moving through the pleadings, fact and expert witness and disclosure phases, and ending with AI’s potential use at the final merits hearing. We also consider the possibility of AI-generated evidence. The potential risks and limitations of these uses are discussed below.

Pre-dispute and claim development

Imagine an AI tool that had access to your contracts and (1) automatically reviewed all emails, documents and other data regarding the contract’s performance, (2) compiled evidence of possible breaches and notified you of them and (3) suggested next steps, including diarising deadlines for contractual notices, preparing draft claim letters or drafting talking points for inter-party meetings. Or perhaps you simply want AI to find and compile the relevant evidence so that external counsel can advise you how to proceed.

Sound far-fetched? Maybe less than you think. There are already AI tools that fulfil some of these functions and are transforming efficiency in the workplace.

In September 2023, Microsoft announced Copilot, which integrates GPT-based AI across the full suite of Microsoft 365 products, to help automate, accelerate and enhance many aspects of knowledge work. For example, you can ask Copilot to find and summarise all emails and documents relating to a specific topic or contract. It can also draft legal memos in Word based on prior documents or prompts, create slide decks from matter summaries and extract insights from spreadsheets.

Beyond Copilot, multiple legal technology companies claim to be able to (or to be developing tools that) automatically review contract compliance or identify, synthesise and organise evidence. While the efficacy of these tools must be assessed on a case-by-case basis, the overall promise of LLMs for this use case is highlighted by a study using the LegalBench benchmark,[11] which compared the performance of various LLMs in carrying out legal reasoning tasks, such as issue spotting, interpretation and rhetorical tasks,[12] and found that LLMs perform well when identifying emerging legal issues from a set of facts.[13]

Not only can these tools enhance productivity and lead to cost savings (for both clients and lawyers), they may also provide strategic benefits. The sooner one masters the factual record, the earlier it is possible to identify the key issues in dispute, the strengths and weaknesses of one’s case and the ‘right’ litigation strategy. Parties and lawyers adept at using these tools would be able to act more quickly and be better informed.

But what about when AI gets it wrong – for example, flagging irrelevant evidence or missing issues or deadlines? Of course, human beings are not perfect in the evidence-collection process either – far from it. Where tools are at an acceptable baseline level of competence, the process could combine AI and human intelligence, and be iterative: the lawyer reviewing an AI model’s initial selection of evidence, asking further questions of the AI and providing feedback, and the AI calibrating its results. So long as the lawyer ultimately reviews and assesses the reliability and relevance of the underlying documents flagged as evidence by AI, these risks should be manageable. As discussed below, the biggest dangers seem to arise when lawyers blindly rely on an AI model’s findings.

Pleadings

Consider the following scenario. It is 9am. You check your email. As expected, the other side’s statement of case arrived at 4.34am, a few hours after the midnight deadline. ‘Those poor souls’, you think to yourself as you sip your morning coffee. The submission, including witness statements, expert reports and exhibits, spans thousands of pages. You take a deep breath and brace yourself for the task ahead. You have a lot of reading to do.

But why not first upload the submission to your ‘AI Assistant’ and ask for:

  • a summary of all the key points, both for the submission as a whole and for each individual document;
  • initial ideas for counterarguments and evidence based on the documents your client has made available to you, as well as based on what is in the public domain and on legal databases; and
  • any relevant trends or tendencies you should be aware of for your tribunal (e.g., based on their publications, articles or publicly available awards) concerning their attitude towards certain issues to inform which evidence and arguments you should focus on.

Now, of course, any lawyer worth their salt will need to read, and re-read, the submissions. And clients would also need to agree to the use of AI. But a preliminary review would undoubtedly be useful. It would speed up understanding and help to identify evidence and ideas for your defence that may otherwise have been missed. Tools with such capabilities are under development, if not already in use.[14]

A generative AI model could, in theory, prepare the first draft of pleadings based on a lawyer’s bullet point outline of the key points, themes and evidence. The first draft would not be perfect but it could be an iterative process. The lawyer’s role in the drafting process could evolve into one similar to an editor: providing comments and asking the AI to write and rewrite accordingly.

Fact witness and expert evidence

Voice recognition AI could listen to fact witness interviews and integrate with a generative text AI to prepare first drafts. Can this help prevent distortion or contamination of witness memory?[15] Or does it invite the criticism that the statement is not in the witness’s own words,[16] but those of an AI model? Reasonable minds might differ on these questions, as well as on whether the use of AI is more or less preferable to lawyers preparing first drafts. As far as England and Wales is concerned, it is suggested in the notes to the White Book that the Civil Procedure Rules Practice Directions do not permit the use of generative AI for witness statements, at least to the extent that the AI suggests substantive content.[17]

As for expert evidence, in US litigation, some experts have openly acknowledged using AI as a tool to inform their analysis, including the use of natural language processing tools and sentiment analysis.[18] The extent and nature of any AI use by experts will undoubtedly be a hot topic of cross-examination, given the expert’s obligation to provide their own, independent and impartial opinion.

Disclosure

Rather than running search terms, which often generate false positives, parties can run document requests as prompts in an AI model to conduct first-level reviews that identify documents that may be responsive or privileged. The potential cost and time savings are obvious. Advanced AI-driven search technology is already being rolled out for litigation purposes, for example through Relativity aiR for Review, an AI tool integrated into Relativity’s eDiscovery platform. Using this tool, key words, sentiment analysis and conceptual clusters can be used alongside aiR to obtain insights and information from large data sets within a few hours, work that would have taken weeks using traditional review methods.[19]
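
By way of illustration only, the sketch below shows how a document request might be run as a prompt for a first-level responsiveness review. It is a minimal sketch, not a description of Relativity aiR or any other vendor’s product: the model name, prompt wording and use of the OpenAI Python client are assumptions made for the purposes of the example, and any real deployment would need to satisfy the confidentiality, data protection and human-oversight considerations discussed elsewhere in this chapter.

```python
# Minimal sketch: running a disclosure request as a prompt for a first-level
# responsiveness review. Model name and prompt wording are assumptions; every
# classification would still be checked by a human reviewer before any
# document is produced or withheld.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

REQUEST = (
    "Documents concerning the performance or alleged breach of the "
    "Supply Agreement dated 1 March 2021."  # hypothetical request wording
)

def first_level_review(document_text: str) -> str:
    """Ask the model whether a document appears responsive to the request."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting with a first-level document review. "
                    "Reply 'RESPONSIVE', 'NOT RESPONSIVE' or 'POSSIBLY PRIVILEGED', "
                    "followed by a one-sentence reason."
                ),
            },
            {
                "role": "user",
                "content": f"Request: {REQUEST}\n\nDocument:\n{document_text}",
            },
        ],
    )
    return response.choices[0].message.content
```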

Merits hearings

AI’s possible use during hearings is especially intriguing. Suppose AI listened to the hearing and reviewed the transcript in real time, while simultaneously looking for counterarguments and evidence – both on the record and in the public domain – responsive to what opposing counsel, witnesses or experts were saying.

This kind of tool would be powerful but also dangerous. The risk of missing hallucinations in the heat of a hearing would be particularly acute. Counsel would need to be especially careful not to mislead the tribunal.[20]

Counsel would need to consider not only whether the AI would need to be disclosed and agreed to, but also whether it complies with any privacy or confidentiality obligations that the parties may have with respect to the hearing. In the United States, a tool was developed to help self-represented litigants contest traffic tickets.[21] It involved using smart glasses that recorded court proceedings and dictated responses to the defendant’s ear from a small speaker. But the developers faced threats of criminal charges for (1) illegally recording audio during a live legal proceeding (which is not permitted in federal court and is often prohibited in state courts) and (2) the unauthorised practice of law.

AI-generated evidence

We have discussed using AI to help identify, analyse and present evidence. But what about AI giving evidence directly? An AI robot did so in 2022 before a parliamentary inquiry in the United Kingdom.[22] Admittedly, this was in a particular context (a review of how AI might affect creative industries), but it raises the question as to whether parties in arbitration might one day attempt to use ChatGPT responses or other AI outputs as evidence (e.g., as opinion evidence). For now, at least, it seems far-fetched that a tribunal would accept a ChatGPT or other generative AI output as probative evidence, given the issue of hallucinations and that (as explained below) ChatGPT essentially operates as a sophisticated ‘predictive text’ machine. But might this change if hallucinations reduce over time and society becomes more acclimatised to trusting AI-generated responses?

As clients increasingly integrate AI into their businesses, other instances of AI-generated evidence will no doubt arise – for instance, AI-generated minutes of meetings where the parties dispute what was said. Should these be given equal evidential weight to human-generated minutes, or should their weight depend on whether they have also been contemporaneously reviewed by a human?

What are the limitations and risks of using AI to handle evidence in arbitrations?

As discussed, AI offers many potentially advantageous and efficient ways of handling evidence in international arbitration. There are, however, important limitations and risks that cannot be ignored. We focus on four:[23] first, the tendency of AI models to generate hallucinations, errors and inaccuracies; second, client confidentiality and privacy issues; third, more general regulatory risks; and fourth, the risk of fabricated AI evidence.

Hallucinations, errors and inaccuracies

As noted above, AI will sometimes confidently assert incorrect answers. These hallucinations can even come with fabricated footnotes and sources, including entirely made-up case names. In a field where accuracy and credibility are paramount, this is clearly a cause for concern.

The reasons for AI hallucinations may be varied.

First, AI can only deliver results that are as good as the data it holds. Yet, data may be incomplete – for example, documents may be confidential, unavailable, not known about or not client-approved for upload onto a platform. By way of illustration, in the context of legal research, AI is currently less reliable where it does not have access to subscription-only legal research services or to confidential awards or documents from a commercial arbitration.[24] Data that is incomplete or selective (whether intentionally or unintentionally) will lead to unreliable results being produced by the algorithm.[25]

Conversely, more data does not always guarantee better performance. While it is generally thought that the bigger the pool of sample data, the more accurate the prediction of an AI model should be,[26] more data, especially if it is of low quality, irrelevant or inconsistent, may sometimes introduce more noise or challenges for AI.[27]

Data can also contain biases that affect the reliability of results. For instance, at A&O Shearman, we have encountered situations where AI trained mainly on US data misinterpreted UK documents. The AI labelled as ‘positive’ responses that UK readers would recognise as passive-aggressive. These cultural biases are particularly relevant for arbitration, given its international nature.

Second, even with high-quality data sets, hallucinations can still occur because of the way generative AI functions. Consider ChatGPT, which – in simple terms – operates by imitating how human beings have written in the past. It has been trained on billions of words of text to detect patterns in language.[28] When asked a question, ChatGPT assesses how those words (and similar ones) have appeared across its training corpus and predicts what the first word in its response should be. It repeats this prediction, one word at a time, until its response is complete.[29] Incredibly, this ‘copycat’ method leads to mostly accurate responses across a great number of fields (as its exam scores attest). But it does not necessarily reflect actual understanding of what it is being asked, or what it is saying, and so the potential for convincing-sounding hallucinations exists.
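
To make the ‘predictive text’ point concrete, the toy sketch below generates text one word at a time by looking up which word most often follows the previous one in a tiny corpus. It is a drastic simplification offered purely to illustrate the one-word-at-a-time generation loop described above; real LLMs predict tokens with neural networks trained on vast corpora, not word-frequency tables.

```python
# Toy illustration of next-word prediction. Real LLMs use neural networks over
# tokens; here we simply count which word most often follows each word in a
# tiny corpus, then generate text one word at a time.
from collections import Counter, defaultdict

corpus = (
    "the tribunal dismissed the claim because the tribunal found "
    "the evidence unreliable and the tribunal awarded costs"
).split()

# Table of which words follow which, with counts.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 6) -> str:
    """Generate text one word at a time, always taking the most common continuation."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-looking output with no understanding behind it
```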

Given the way ChatGPT operates, certain accuracy issues will be down to human input, as the answers presented by AI can only be as good as the question posed. Learning how to pose the right questions to put into an AI tool, or ‘prompting’, is becoming as crucial a skill for junior lawyers as drafting and researching.[30]

Notwithstanding these issues, it is important to remember that AI can still outperform many human lawyers, in terms of both accuracy and speed. Moreover, work is being carried out to reduce the rate of hallucinations. For instance, data sets and models are being refined for particular use cases. AI models are also able to learn from feedback given by human reviewers.[31] And while it may seem that ChatGPT is merely predicting, rather than reasoning, this may be incorrect. Microsoft’s researchers considered that ChatGPT’s exam performances indicated core mental capabilities such as reasoning, creativity and deduction.[32] OpenAI’s chief executive officer, Sam Altman, hopes that ChatGPT will evolve into a ‘reasoning engine over time’, enabling it to separate fact from fiction.[33] According to the Harvard Business Review, the cognitive reasoning abilities of agentic AI systems will allow for greater trustworthiness of output, as they are less likely to suffer from the ‘hallucinations’ that are common to generative AI systems.[34]

Certain models, such as OpenAI o1 and OpenAI o3-mini, have been specifically developed with the goal of enhancing reasoning capabilities, enabling them to generate a chain of thought before responding and to address complex STEM or logic problems. This focus on reasoning operates in parallel with the objective of unsupervised learning, which seeks to improve the model’s accuracy and intuitive ‘understanding’ of the world.[35]

In sum, AI outputs require close scrutiny and verification. They should be treated as first drafts from an inexperienced junior, but one who prefers to concoct an answer rather than confess ignorance. Lawyers who understand this, as well as how to leverage AI’s talents, could enjoy significant efficiency and creativity gains. Those who blindly follow it are courting disaster.

Confidentiality and privacy

As explained above, AI requires large amounts of data to function well. The uses considered above presuppose that all documents disclosed in, relevant to and on the record in the arbitration have been uploaded onto the relevant AI platform. For many clients, this would (understandably) raise alarm bells in terms of confidentiality and data protection.[36] They might wonder who will have access to this data and for what purpose.

A well-publicised example shows the reasonableness of these concerns. Samsung employees uploaded lines of confidential code to ChatGPT on three separate occasions, which gave OpenAI access to that information, potentially enabling it to be used to train the LLM.[37]

Not all AIs operate this way, however. Some AI platforms, such as Harvey, use closed systems whereby any information submitted by a user is secured. Lawyers who make use of AI will need to be certain that the AI they use maintains the confidentiality of client data.

In addition to the possibility of accidental disclosure, volumes of confidential data stored together inevitably face the risk of cybersecurity breaches, especially for emerging technologies – recall the ‘video-crashers’ who disrupted Zoom conference calls during the early days of the pandemic.[38]

Precedent already exists for hacking arbitration matters. In 2015, while the arbitration between China and the Philippines regarding disputed territory in the South China Sea was pending, hackers accessed the website of the Permanent Court of Arbitration, reportedly through malware.[39] Similarly, in Caratube v. Kazakhstan,[40] the claimant in the proceedings managed to obtain confidential information that was leaked from the Kazakh government’s IT system.[41] Initiatives exist that seek to counter these risks, such as the Protocol on Cybersecurity in International Arbitration (updated in 2022) jointly released by the International Council for Commercial Arbitration (ICCA), the New York City Bar Association and the International Institute for Conflict Prevention and Resolution,[42] but these are not currently tailored to risks emanating from the use of AI.

Regulatory compliance

Where technology advances, regulation is sure to follow. Governments and legislators in many jurisdictions are pressing ahead with AI-specific regulation.

This started slowly. In December 2018, the European Commission for the Efficiency of Justice of the Council of Europe adopted an ethical charter on the use of artificial intelligence in judicial systems and their environment.[43] The charter contained five broad principles:

  • respect for fundamental rights;
  • non-discrimination;
  • quality and security;
  • transparency, impartiality and fairness; and
  • ‘under user control’.

The aim was for these principles to be subject ‘to regular application, monitoring and evaluation by public and private actors, with a view to continuous improvement of practices’.[44] The ethical charter does not appear to have been widely adopted or evaluated as envisaged.

As regulation comes into force, it is not a given that AI tools will be compliant in all relevant jurisdictions. On 1 August 2024, the European Union’s Artificial Intelligence Act (the EU AI Act) came into force (although most of its substantive provisions become applicable only at later dates).[45] The Act is the world’s first comprehensive regulation of AI. Given its detailed requirements, which affect each AI risk category differently, many popular AI tools could be rendered non-compliant.[46] To help navigate this complexity, an interactive compliance checker has been developed to assess whether tools fall foul of the Act.[47] Some non-compliances may be fixable but others may threaten the underlying viability of the technology (at least according to some AI advocates).[48]

Notably, the EU AI Act categorises AI systems that facilitate the administration of justice and democratic processes as high-risk.[49] High-risk AI systems are subject to strict obligations before they can be put on the market, including: verification that they are based on high-quality data sets to minimise the risks of discriminatory outcomes; appropriate human oversight measures to minimise risk; and demonstration of high levels of robustness, security and accuracy.[50] Further, the EU AI Act recognises the broad ways in which AI can be used within the legal sphere and cracks down specifically on AI tools that directly affect Charter rights, distinguishing them from AI tools used for administrative tasks.[51] These requirements, especially for ‘more involved’ legal AI tools, may (in the short term) slow the development and, thus, adoption of AI tools intended for legal use; in the longer term, however, these safeguards could help drive adoption if they bolster public confidence in these tools.

Another complexity is that approaches to AI regulation may differ widely across jurisdictions. For instance, the United Kingdom looks set to take a less formulaic (and potentially more lenient) approach than the European Union.[52]

The United States and China also appear to be adopting different approaches.[53] Political change adds to the uncertainty: in January 2025, President Trump issued an Executive Order aimed at eliminating restrictions on AI, which revoked President Biden’s previous order focused on the safe and responsible use of AI. The Trump order instructed federal agencies to review and potentially overturn any existing policies from the Biden administration that do not align with the goal of strengthening the United States’ global leadership in AI.[54] In July 2025, the White House released ‘America’s AI Action Plan’, which identifies over 90 federal policy actions that the Trump administration will adopt in coming months. Key policies include ‘removing onerous regulations that hinder AI development and deployment’, seeking private sector input on rules to remove, exporting US AI packages and promoting the rapid buildout of data centres.[55]

The onus will be on arbitration practitioners to ensure any use of AI complies with applicable regulations. This may extend beyond regulations in their home jurisdiction, to include the laws of the seat and place of enforcement. It is not inconceivable that parties may soon attempt to resist enforcement of an award on the grounds that the other side’s use of AI was illegal under one of the applicable laws to the arbitration. This risk currently looks most likely in certain jurisdictions if AI is being used as the decision maker (rather than as a tool for evidence). For example, both the French and Dutch Civil Procedure Codes require an arbitrator to be a human being.[56] This also calls into question whether arbitrators using AI should themselves disclose its use, to avoid concerns, or potential challenges, on the grounds that any decision-making has been delegated to AI.[57]

There is also clearly the potential for parties from different jurisdictions to be subject to very different rules regarding which AI tools they can and cannot use. This raises concerns regarding level playing fields and, thus, the legitimacy of the arbitration itself. In the interests of fairness, it may fall on the international arbitration community to develop common standards, or, at the very least, it may fall on arbitral tribunals to address these concerns in their first procedural orders. A potential difficulty faced by rule-making initiatives is the speed at which AI is developing, and the risk that as soon as rules are written, they may become obsolete.

At present, arbitration rules and institutions provide either no, or limited, guidance with respect to AI. For instance, most major arbitration rules do not currently address AI, either as a means to aid disclosure or more generally. An exception is the Silicon Valley Arbitration and Mediation Center (SVAMC), which published ‘Guidelines on the Use of AI in Arbitration’ in April 2024. The Guidelines provide that parties may have to disclose their use of AI tools, but that decisions as to whether such disclosure is required will be made on a case-by-case basis.[58] The Chartered Institute of Arbitrators (CIArb) published its ‘Guidelines on the Use of AI in Arbitration’ on 13 March 2025. The CIArb Guidelines encourage arbitrators to take a proactive role in managing the use of AI in arbitration. Key steps include discussing the use of AI early in the proceedings, appointing AI experts and requiring disclosure of the use of AI tools. Tribunals are also encouraged to address AI in their awards and consider non-compliance with AI-related directions when determining costs.[59]

The question of whether AI use is required to be disclosed is increasingly at the forefront of institutions’ and practitioners’ minds, with different jurisdictions and institutions adopting varying approaches.

In the United States, disclosure is often determined by individual judges on a courtroom-by-courtroom basis, with some courts requiring lawyers to certify whether AI was used in preparing submissions and to confirm human verification of any AI-generated content.[60] The Dubai International Financial Centre Courts take a stricter approach, mandating early disclosure of AI use to both the court and opposing parties, including details about the AI tool and any known limitations or biases.[61] By contrast, the United Kingdom and New Zealand adopt a more relaxed stance. In these jurisdictions, there is generally no obligation to disclose AI use unless specifically requested by the court or tribunal.[62] Nevertheless, lawyers are expected to ensure the accuracy and reliability of any AI-generated content.

At the international level, a task force convened by the International Bar Association (IBA) is consulting on the use of AI in international arbitration.[63]

Many existing institutional rules and national arbitration laws encourage parties and tribunals to conduct arbitration efficiently, with regard to costs, with implicit, or sometimes explicit, reference to electronic disclosure. Parties may argue that these references implicitly permit the use of AI to assist in managing evidence. Article 1.5 of the IBA Rules on the Taking of Evidence in International Arbitration also provides tribunals with flexibility, where the applicable rules are otherwise silent, to ‘conduct the taking of evidence as it deems appropriate, in accordance with the general principles of the IBA Rules of Evidence’. Similarly, in 2022, the ICCA and the IBA’s Joint Task Force on Data Protection in International Arbitration referred to AI as a means of minimising disclosure of personal data.[64]

Apart from the few AI-specific regulations and guidelines enumerated above, there is currently limited guidance for the arbitration community on the evidential use of AI in arbitration. In light of the developments summarised above, and the relative regulatory uncertainty, the use of AI tools is likely to arise imminently as an issue for agreement between parties and, failing that, for determination by tribunals. In addition to establishing what tools (if any) the parties are able to use, the question may arise as to whether parties are obliged to indicate when AI has been used to prepare witness statements or pleadings (as has been required in some US courtrooms, and as envisaged in certain circumstances by the SVAMC Guidelines).[65]

AI-generated forgeries

Another issue is the difficulty of identifying whether content has been written by human hand or by AI. A number of tools exist to try to identify AI-generated text, but none is fully accurate, and current advice is to test text against several of these tools to increase the chances of a correct determination.[66]
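
As a purely hypothetical sketch of that advice, the snippet below polls several detectors and compares their verdicts. The detector functions are placeholders standing in for whichever commercial or open-source tools are actually used; no individual score, or combination of scores, should be treated as conclusive.

```python
# Hypothetical sketch: polling multiple AI-text detectors and summarising their
# verdicts. The detectors here are placeholder lambdas, not real products.
from typing import Callable, Dict

Detector = Callable[[str], float]  # returns an estimated probability the text is AI-generated

def poll_detectors(text: str, detectors: Dict[str, Detector]) -> Dict[str, float]:
    """Run the same text through every detector and collect the scores."""
    return {name: detector(text) for name, detector in detectors.items()}

def summarise(scores: Dict[str, float], threshold: float = 0.5) -> str:
    """Report how many detectors flagged the text as likely AI-generated."""
    flagged = [name for name, score in scores.items() if score >= threshold]
    return (
        f"{len(flagged)} of {len(scores)} detectors flagged the text: "
        f"{', '.join(flagged) or 'none'}"
    )

# Placeholder detectors; real tools would be called here instead.
detectors = {
    "detector_a": lambda text: 0.82,
    "detector_b": lambda text: 0.35,
    "detector_c": lambda text: 0.67,
}
scores = poll_detectors("Text of the disputed witness statement...", detectors)
print(summarise(scores))  # "2 of 3 detectors flagged the text: detector_a, detector_c"
```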

This issue may become important should unscrupulous parties be tempted to use AI technology to create fake documentary, photographic or video evidence (or naïve parties be tempted into purchasing this evidence from unscrupulous operators).[67] Forgeries are not a novel issue,[68] especially digital forgeries in the era of low-cost image-editing software such as Photoshop. AI-generated images present a unique challenge as it can be impossible for the naked eye to detect forgeries.[69] As new forgery methods arise, forgery detection software follows (admittedly at a slightly slower pace). It remains to be determined how forged evidence can be safeguarded against, without leaving the door open for any shrewd defendant to argue that genuine, adverse evidence is in fact fake (the ‘deepfake defence’).[70]

International arbitration is arguably particularly vulnerable to deepfakes owing to the restricted discovery processes, the limited power of tribunals to sanction misconduct and the lack of public scrutiny to discourage dishonest behaviour.[71] Deepfakes threaten the integrity of remote interactions during arbitrations, such as videoconferencing and oral testimonies. Models can convincingly mimic tone, mannerisms and facial expressions, enabling someone to convincingly pose as a witness and make it appear as though an actual person is giving a genuine testimony, when in reality they are not.[72]

To account for this uncertainty in the international arbitration context, perhaps all digital evidence will need to be accompanied by a counsel’s statement of authenticity or expert opinion confirming that the content has been examined and is authentic and reliable.[73]

Conclusions

The adoption of AI models to analyse evidence is one of the less controversial uses of AI in the context of international arbitration (as compared to, say, decision-making).[74] In theory – although this needs to be fully tested in practice – AI has the potential to perform tasks such as document review, data extraction and anomaly detection with greater speed and accuracy than humans. By using AI, arbitration practitioners can save time and money, reduce errors and focus on the more strategic and creative aspects of their cases.

However, the benefits of AI for evidence management are not without challenges, including ensuring the reliability, security and ethical standards of the technology, as well as gaining the trust and acceptance of clients and lawyers who may be wary of delegating such a crucial aspect of their work to machines.[75] Achieving this optimal balance will require careful attention to the quality and validity of the data underlying AI systems, as well as to the legal and ethical implications of their use. Arbitration practitioners will also need to communicate effectively with their clients and colleagues about the benefits and limitations of AI and ensure that they retain oversight and accountability.

AI in this context is not a threat to the legal profession but rather an opportunity to enhance and transform it. AI cannot replace the human qualities that make lawyers valuable, such as critical thinking, creativity, empathy and advocacy. The focus now should be on understanding how users can achieve the best and most accurate results from AI. Lawyers who embrace AI as a tool to augment their skills and expertise will have a competitive edge over those who resist or ignore it.

Acknowledgements

The first edition of this chapter was written with Martin Magál, one of Slovakia’s most accomplished private-practice lawyers and leading arbitration practitioners, who is greatly missed. The authors are also grateful to Jason Rix, Zahra Abdul-Malik and Yasmin El-Briek for their assistance with the preparation of this chapter.


Endnotes

[1] See Shubham Singh, ‘ChatGPT Statistics (2025): DAU & MAU Data [Worldwide]’, 5 June 2025, available at https://www.demandsage.com/chatgpt-statistics/. Yet, this is only a fraction of overall AI use: see Evan Bailyn, ‘Top Generative AI Chatbots by Market Share’, FirstPageSage (last updated 1 July 2025), available at https://firstpagesage.com/reports/top-generative-ai-chatbots/.

[2] ChatGPT’s latest iteration, GPT-4.5, was released in February 2025: see OpenAI, ‘Introducing GPT-4.5’, available at http://openai.com/index/introducing-gpt-4-5/.

[3] ‘GPT-4 Technical Report’, OpenAI, 27 March 2023, p. 5, available at https://cdn.openai.com/papers/gpt-4.pdf.

[4] ‘A&O announces exclusive launch partnership with Harvey’, A&O Shearman, 15 February 2023, available at http://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey.

[5] ‘A&O launches SaaS partnership with Microsoft and Harvey’, A&O Shearman, 21 December 2023, available at https://www.aoshearman.com/en/news/ao-launches-saas-partnership-with-microsoft-and-harvey.

[6] ‘A&O Shearman and Harvey to roll out agentic AI agents targeting complex legal workflows’, 6 April 2025, available at https://www.aoshearman.com/en/news/ao-shearman-and-harvey-to-roll-out-agentic-ai-agents-targeting-complex-legal-workflows.

[7] Matt LoDolce, ‘Gartner Forecasts Worldwide GenAI Spending to Reach $644 Billion in 2025’, Gartner, 31 March 2025, available at https://www.gartner.com/en/newsroom/press-releases/2025-03-31-gartner-forecasts-worldwide-genai-spending-to-reach-644-billion-in-2025.

[8] Geoffrey Hinton (winner of the 2018 Turing Award, and one of the ‘Godfathers of AI’) has argued: ‘We need to find a way to control artificial intelligence before it’s too late.’ Even OpenAI’s chief executive officer (CEO), Sam Altman, has acknowledged: ‘We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.’ See ‘Geoffrey Hinton: “We need to find a way to control artificial intelligence before it’s too late”’, El País, 12 May 2023, available at https://english.elpais.com/science-tech/2023-05-12/geoffrey-hinton-we-need-to-find-a-way-to-control-artificial-intelligence-before-its-too-late.html; Edward Helmore, ‘“We are a little bit scared”: OpenAI CEO warns of risks of artificial intelligence’, The Guardian, 17 March 2023, available at http://www.theguardian.com/technology/2023/mar/17/openai-sam-altman-artificial-intelligence-warning-gpt4. Advocates have also elaborated on concerns surrounding the creation of fake content that can ‘harm individuals in a targeted way’, explaining how deepfakes could have the potential to sabotage personal and professional reputations or cause psychological abuse. See Department for Science, Innovation & Technology, ‘International AI Safety Report 2025’, 18 February 2025, available at https://www.gov.uk/government/publications/international-ai-safety-report-2025. The ‘Global Risks Report 2025’ published by the World Economic Forum also identifies risks such as algorithmic bias, where AI systems can reflect the biases present in the data they are trained on, and the increasing risk of misinformation resulting from reliance on generative AI systems. See World Economic Forum, ‘Global Risks Report 2025’, 15 January 2025, pp. 10, 34–36, available at https://www.weforum.org/publications/global-risks-report-2025/ and Blackberry Research and Intelligence Team, ‘Deepfakes and Digital Deception: Exploring Their Use and Abuse in a Generative AI World’, Blackberry, 29 August 2024, available at https://blogs.blackberry.com/en/2024/08/deepfakes-and-digital-deception.

[9] A recently published database, accessible at https://www.damiencharlotin.com/hallucinations/, tracks legal decisions in cases where generative AI has produced hallucinated content, and resulting sanctions. In two recent UK judicial review cases, the High Court of England and Wales referred two solicitors to the Bar Standards Board and Solicitors Regulation Authority, respectively, following a failure to detect false citations generated by artificial intelligence (AI) tools in legal submissions (Ayinde v. Haringey and Al-Haroun v. QNB [2025] EWHC 1383 (Admin) 6 June 2025, https://www.judiciary.uk/judgments/ayinde-v-london-borough-of-haringey-and-al-haroun-v-qatar-national-bank/). Also in the United Kingdom, a litigant in person submitted fake case law to a tribunal (‘Litigant unwittingly put fake cases generated by AI before tribunal’, LegalFutures, 7 December 2023, accessible at https://www.legalfutures.co.uk/latest-news/litigant-unwittingly-put-fake-cases-generated-by-ai-before-tribunal). In a recent case, the judge made an exceptional wasted costs order of £4,000 against the legal counsel acting for a defendant council in light of their reliance on five fake legal cases in their submissions (DAT Green, ‘The use of artificial intelligence in courts: a warning’, Prospect, 8 May 2025, available at: https://www.prospectmagazine.co.uk/ideas/law/the-weekly-constitutional/69890/artificial-intelligence-in-courts-a-warning). Similar cases have arisen in the United States, where, for example, a Missouri-based technology business was ordered to pay additional damages of US$311,000 in an employee claim for briefing deficiencies and submitting fake cases generated by AI (‘AI-Generated Fake Case Law Leads To Sanctions In Wage Suit’, Law360, 13 February 2024, available at https://www.law360.com/articles/1797437/ai-generated-fake-case-law-leads-to-sanctions-in-wage-suit).

[10] We noted GPT-4’s impressive exam results above. Other AI models have also demonstrated an ability to outperform humans on specific legal tasks. For instance, a 2018 study commissioned by LawGeex, a Tel Aviv-based software company, pitted 20 experienced US-qualified lawyers against LawGeex’s AI algorithm to review non-disclosure agreements (NDAs), to issue-spot and annotate. The AI algorithm (having been trained on thousands of other NDAs) achieved a 94 per cent accuracy rate at spotting issues, in 26 seconds, compared with an average of 85 per cent accuracy for the lawyers in an average time of 92 minutes. See ‘Comparing the Performance of Artificial Intelligence to Human Lawyers in the Review of Standard Business Contracts’, LawGeex, February 2018, available at https://images.law.com/contrib/content/uploads/documents/397/5408/lawgeex.pdf. In February 2025, Linklaters updated its testing of two of the latest LLMs, OpenAI o1 and Google Gemini 2.0, in a benchmarking exercise comprising 50 questions from 10 different areas of the legal practice. Gemini 2.0 earned a score of 6.0 out of 10 while OpenAI o1 earned 6.4 out of 10, a significant improvement from the last benchmarking exercise in 2023 where the top scorer at the time only earned a score of 4.4. Despite these increasing scores, it is recommended that LLMs are not used without expert human supervision. See ‘UK – The LinksAI English law benchmark (Version 2)’, Linklaters, 10 February 2025, available at https://www.linklaters.com/en/insights/blogs/digilinks/2025/february/uk-the-linksai-english-law-benchmark-version-2.

[11] LegalBench is a collaboratively built legal reasoning benchmark consisting of 162 tasks across six different categories of legal reasoning. The benchmark was constructed by academics from several institutions, including Stanford University, University of Toronto and Harvard Law School.

[12] Neel Guha, et al., ‘LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models’, 20 August 2023, p. 5, available at https://arxiv.org/pdf/2308.11462.

[13] id., p. 15.

[14] Harvey has this functionality. CoCounsel by Casetext (https://casetext.com/) can also read, analyse and summarise legal documents and identify specific contract items in large databases of information (Mark Sullivan, ‘How Casetext is assigning every attorney an AI co-counsel’, Fast Company, 19 March 2024, available at https://www.fastcompany.com/91033227/casetext-thomson-reuters-most-innovative-companies-2024). Tools such as Chronologica review documents and pull out the key facts, events, individuals and conclusions (available at https://safelinkhub.com/chronologica-chronologies#Feature–Anchor).

[15] As investigated by the International Chamber of Commerce, Commission on Arbitration and ADR, Task Force report, ‘The Accuracy of Fact Witness Memory in International Arbitration: Current Issues and Possible Solutions’ (2020), available at https://library.iccwbo.org/content/dr/commission_reports/cr_0062.htm#TOC_BKL1_1_4 and https://iccwbo.org/wp-content/uploads/sites/3/2020/11/icc-arbitration-adr-commission-report-on-accuracy-fact-witness-memory-international-arbitration-english-version.pdf.

[16] In the English court context, Practice Direction 32 (para. 18.1) and Practice Direction 57AC (applicable to the business and property courts) now require witness statements to be in witnesses’ own words wherever possible.

[17] ‘Practice Direction 57AC – Trial Witness Statements in the Business and Property Courts’, White Book 2025 (Volume 2), last updated 6 January 2025, available at https://www.justice.gov.uk/courts/procedure-rules/civil/rules/part-57a-business-and-property-courts/practice-direction-57ac-trial-witness-statements-in-the-business-and-property-courts.

[18] Anindya Ghose, ‘The Role of AI in Litigation and Competition Expert Analysis: A Conversation with Anindya Ghose’, 25 April 2025, available at https://www.compasslexecon.com/insights/publications/the-role-of-ai-in-litigation-and-competition-expert-analysis-a-conversation-with-professor-anindya-ghose.

[19] The acceptability of the use of AI will vary depending on different factors, such as the arbitral institution, the relevant procedural orders and the governing law of the seat (see further details below). In the United Kingdom, for example, currently, there is no English case law that addresses the use of generative AI for disclosure purposes. In the litigation context, Practice Direction 57AD, para 3.2(3), promotes the ‘reliable, efficient and cost-effective conduct of disclosure, including through the use of technology’, which may be used to suggest that the use of new technology may be encouraged in a disputes context.

[20] Perhaps by informing the tribunal if and how AI has been used in preparing materials filed with the tribunal, if required by the tribunal in its procedural directions. The Court of King’s Bench in Manitoba, Canada, has introduced a practice direction requiring this information for materials filed with that Court. See Practice Direction, ‘Court of King’s Bench of Manitoba Re: Use of Artificial Intelligence in Court Submissions’, 23 June 2023, available at http://www.manitobacourts.mb.ca/site/assets/files/2045/practice_direction_-_use_of_artificial_intelligence_in_court_submissions.pdf.

[21] Bobby Allyn, ‘A robot was scheduled to argue in court, then came the jail threats’, NPR, 25 January 2023, available at http://www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats.

[22] Martyn Landi, ‘Ai-Da robot makes history by giving evidence to parliamentary inquiry’, The Independent, 11 October 2022, available at http://www.independent.co.uk/news/uk/politics/house-of-lords-technology-liberal-democrat-b2200496.html.

[23] This chapter focuses on risks pertinent to the use of AI in the context of evidence. It therefore does not address more general risks to society and civilisation by human-competitive AI systems; for example, in the form of economic and political disruptions.

[24] Maxi Scherer, ‘Artificial Intelligence and Legal Decision-Making: The Wide Open?’, 2019, 36(5), Journal of International Arbitration 539, pp. 554–55. Concerns that the confidentiality of awards and documents in commercial arbitration is a considerable obstacle to machine learning can be addressed, to an extent, by the use of anonymised versions of awards.

[25] Jenny Gesley and Viktoria Fritz, ‘Artificial “Judges”? – Thoughts on AI in Arbitration Law’, 2021, available at https://blogs.loc.gov/law/2021/01/artificial-judges-thoughts-on-ai-in-arbitration-law/.

[26] Maxi Scherer, supra note 24, p. 554.

[27] Michael Ansaldo, ‘When training AI models, is a bigger dataset better?’, Enterprise.nxt (20 July 2022), available at http://web.archive.org/web/20230318054634/https://www.hpe.com/us/en/insights/articles/when-training-ai-models-is-a-bigger-dataset-better-2207.html.

[28] ChatGPT-4 was apparently trained on approximately 300 billion words. See Aparna Iyer, ‘Behind ChatGPT’s Wisdom: 300 Bn Words, 570 GB Data’, Analytics India Magazine (15 December 2022), available at https://analyticsindiamag.com/behind-chatgpts-wisdom-300-bn-words-570-gb-data/.

[29] For a detailed explanation of how ChatGPT and other large language models work, see Stephen Wolfram, ‘What is ChatGPT Doing . . . and Why Does It Work?’ (14 February 2023), available at https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/.

[30] OpenAI has published strategies and guidelines for how to ‘prompt’ ChatGPT-4 to increase the accuracy of results: see ‘GPT best practices’, OpenAI, https://platform.openai.com/docs/guides/gpt-best-practices.

[31] Notably, ChatGPT-4 is 40 per cent more likely to produce factual responses than GPT-3.5, based largely on increasing the data set and incorporating human feedback. See ‘GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses’, OpenAI, available at https://openai.com/gpt-4. OpenAI described the more recently released preview of GPT-4.5 as the ‘strongest GPT model’: see ‘Introducing GPT-4.5’, OpenAI, available at https://openai.com/index/introducing-gpt-4-5/. ChatGPT-5 is expected to launch soon, with lauded features including improved reasoning ability, an enhanced understanding of greater forms of media and higher reliability: see Saqib Shah, ‘ChatGPT 5 release date: what we know about OpenAI’s next chatbot as rumours suggest summer release’, The Standard, 6 June 2025, available at https://www.standard.co.uk/news/tech/chatgpt-5-release-date-details-openai-chatbot-sam-altman-b1130369.html.

[32] Sébastien Bubeck, et al., ‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’, Microsoft, March 2023, available at http://www.microsoft.com/en-us/research/publication/sparks-of-artificial-general-intelligence-early-experiments-with-gpt-4/.

[33] Victor Ordonez, Taylor Dunn and Eric Noll, ‘OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: “A little bit scared of this”’, ABC News, 16 March 2023, https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-societyacknowledges/story?id=97897122.

[34] Mark Purdy, ‘What is Agentic AI, and How Will It Change Work?’, Harvard Business Review, 12 December 2024, available at https://hbr.org/2024/12/what-is-agentic-ai-and-how-will-it-change-work.

[35] ‘OpenAI o3-mini’, OpenAI, 31 January 2025, available at https://openai.com/index/openai-o3-mini/.

[36] A detailed analysis of the interplay between AI and data protection is beyond the scope of this chapter; however, further guidance can be found at (1) UK Information Commissioner’s Office, ‘Guidance on AI and data protection’, updated 15 March 2023, available at https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/; (2) Council of Europe, ‘Artificial Intelligence and Data Protection’, November 2019, available at https://edoc.coe.int/en/artificial-intelligence/8254-artificial-intelligence-and-data-protection.html.

[37] Lewis Maddison, ‘Samsung workers made a major error by using ChatGPT’, Techradar, 4 April 2023, available at http://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt.

[38] ‘Arbitration as a “new normal” may hinge on cybersecurity’, LexisNexis, 11 May 2020, available at http://www.lexisnexis.co.uk/blog/covid-19/arbitration-as-a-new-normal-may-hinge-on-cybersecurity.

[39] David Turner and Gulshan Gill, ‘Addressing emerging cyber risks: reflections on the ICCA Cybersecurity Protocol for International Arbitration’, Practical Law Arbitration Blog, 17 May 2019, available at http://arbitrationblog.practicallaw.com/addressing-emerging-cyber-risks-reflections-on-the-icca-cybersecurity-protocol-for-international-arbitration/; Luke Eric Peterson, ‘Permanent Court Of Arbitration Website Goes Offline, With Cyber-Security Firm Contending That Security Flaw Was Exploited In Concert With China-Philippines Arbitration’, Investment Arbitration Reporter, 23 July 2015, available at http://www.iareporter.com/articles/permanent-court-of-arbitration-goes-offline-with-cyber-security-firm-contending-that-security-flaw-was-exploited-in-lead-up-to-china-philippines-arbitration/.

[40] Caratube International Oil Company LLP v. Republic of Kazakhstan (I), ICSID Case No. ARB/08/12.

[41] John Choong, et al., ‘Data protection and cybersecurity in international arbitration remain in the spotlight’, Freshfields Bruckhaus Deringer, 2023, available at http://www.freshfields.com/en-gb/our-thinking/campaigns/international-arbitration-in-2023/data-protection-and-cybersecurity-in-international-arbitration-remain-in-the-spotlight/.

[42] The 2020 Cybersecurity Protocol for International Arbitration, jointly released in 2019 by the International Council for Commercial Arbitration (ICCA), New York City Bar Association and the International Institute for Conflict Prevention and Resolution, provides helpful guidelines and examples of information security measures that may be adopted and tailored to a particular arbitration, available at https://cdn.arbitration-icca.org/s3fs-public/document/media_document/ICCA-reports-no-6-icca-nyc-bar-cpr-protocol-cybersecurity-international-arbitration-2022-edition.pdf.

[43] European Commission for the Efficiency of Justice, ‘European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment’, December 2018, available at https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c, summarised at http://www.biicl.org/documents/10496_merethe_eckhardt.pdf.

[44] id., at p. 6.

[45] ‘AI Act implementation timeline’, European Parliament, 2025, available at https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/772906/EPRS_ATA(2025)772906_EN.pdf.

[46] Rishi Bommasani, et al., ‘Do Foundation Model Providers Comply with the Draft EU AI Act?’, Stanford Center for Research on Foundation Models, 15 June 2023, available at https://crfm.stanford.edu/2023/06/15/eu-ai-act.html.

[47] ‘EU AI Act Compliance Checker’, EU Artificial Intelligence Act, available at https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/.

[48] While commenting on the European Union’s proposed AI legislation, OpenAI CEO Sam Altman noted: ‘If we can comply, we will, and if we can’t, we’ll cease operating . . . We will try. But there are technical limits to what’s possible.’ See Billy Perrigo, ‘OpenAI Could Quit Europe Over New AI Rules, CEO Sam Altman Warns’, Time, 25 May 2023, available at https://time.com/6282325/sam-altman-openai-eu/.

[49] EU Artificial Intelligence Act (Regulation (EU) 2024/1689), Official Journal, version dated 13 June 2024, ‘Section 1, Classification of AI Systems as High-Risk’, available at https://artificialintelligenceact.eu/section/3-1.

[50] id., see ‘Section 2, Requirements for High-Risk AI Systems’, such as Article 14 (Human Oversight) and Article 15 (Accuracy, Robustness and Cybersecurity).

[51] id., Recital 59.

[52] The UK government’s AI Regulation White Paper of August 2023 and its written response of 6 February 2024 make it clear that the United Kingdom has no plans to introduce broad, overarching AI legislation any time soon. Rather, the White Paper and the Response advocate for a ‘principles-based framework’, which would allow current regulators in specific sectors to interpret and implement these principles as they oversee the development and use of AI in their respective areas. See Department for Science, Innovation & Technology: ‘A pro-innovation approach to AI regulation’, 3 August 2023, available at https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper; and ‘A pro-innovation approach to AI regulation: government response’, 6 February 2024, para. 38, available at https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response.

[53] Alex Engler, ‘The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment’, The Brookings Institution, 25 April 2023, available at http://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/; Matt O’Shaughnessy and Matt Sheehan, ‘Lessons From the World’s Two Experiments in AI Governance’, Carnegie Endowment for International Peace, 14 February 2023, available at https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035.

[54] Nick Reem, ‘AI Watch: Global regulatory tracker – United States’, White & Case, 25 March 2025, available at https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states.

[55] The White House, ‘White House Unveils America’s AI Action Plan’, 23 July 2025, available at https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/.

[56] See French Civil Procedure Code, Article 1450, available at https://www.legifrance.gouv.fr/codes/texte_lc/LEGITEXT000006070716/; Dutch Code of Civil Procedure, Article 1023, available at https://www.dutchcivillaw.com/legislation/civilprocedure044.htm.

[57] Judges have admitted to using AI tools to render rulings: see Luke Taylor, ‘Colombian judge says he used ChatGPT in ruling’, The Guardian, 3 February 2023, available at http://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling; and Ben Cost, ‘Judge asks ChatGPT to decide bail in murder trial’, New York Post, 29 March 2023, available at https://nypost.com/2023/03/29/judge-asks-chatgpt-for-decision-in-murder-trial/.

[58] Silicon Valley Arbitration & Mediation Center, ‘Guidelines on the Use of Artificial Intelligence in Arbitration’ (1st edition 2024), Guideline 3, available at https://svamc.org/wp-content/uploads/SVAMC-AI-Guidelines-First-Edition.pdf.

[59] Chartered Institute of Arbitrators, ‘Guideline on the Use of AI in Arbitration (2025)’, 13 March 2025, Section 4, available at https://www.ciarb.org/media/m5dl3pha/ciarb-guideline-on-the-use-of-ai-in-arbitration-2025-_final_march-2025.pdf.

[60] An online list of such requirements is maintained and updated by Responsible AI in Legal Services, available at https://rails.legal/resources/resource-ai-orders/.

[61] ‘Practical Guidance Note No. 2 of 2023 Guidelines on the use of large language models and generative AI in proceedings before the DIFC Courts’, 21 December 2023, Principles 1 and 4.

[62] ‘Guidelines for use of generative artificial intelligence in Courts and Tribunals: Lawyers’, Courts of New Zealand, 7 December 2023, available at http://www.courtsofnz.govt.nz/going-to-court/practice-directions/practice-guidelines/all-benches/guidelines-for-use-of-generative-artificial-intelligence-in-courts-and-tribunals; see also UK Courts and Tribunals Judiciary, ‘Artificial Intelligence (AI): Guidance for Judicial Office Holders’, 12 December 2023, p. 2; and Elizabeth Chan and Katrina Limond, ‘Striking the Right Balance: Approaching Disclosure of Generative AI-Assisted Work Product in International Arbitration’, Belgian Review of Arbitration, 2024, available at https://www.kluwerarbitration.com/document/kli-ka-b-arbitra-2024-1-004.

[63] This follows a report issued by the International Bar Association (IBA) and the Center for AI and Digital Policy (CAIDP) in September 2024, titled ‘The Future is Now: Artificial Intelligence and the Legal Profession’, exploring the transformative impact of AI on the legal profession and providing insight into the governance of AI technologies in legal practice (Selim Alan, ‘IBA and CAIDP release groundbreaking report The Future is Now: Artificial Intelligence and the Legal Profession’, IBA, 30 September 2024, available at https://www.ibanet.org/IBA-and-CAIDP-release-report-The-Future-is-Now-Artificial-Intelligence-and-the-Legal-Profession).

[64] ICCA-IBA Joint Task Force on Data Protection in International Arbitration, ‘Roadmap to Data Protection in International Arbitration’, The ICCA Reports No. 7, 2022, available at https://cdn.arbitration-icca.org/s3fs-public/document/media_document/ICCA_Reports_No_7_ICCA-IBA_Joint_Task_Force_on_Data_Protection_in_International_Arbitration.pdf, pp. 48 and 61.

[65] US federal judges have required lawyers to disclose the use of generative AI tools in cases before them: Sara Merken, ‘Another US judge says lawyers must disclose AI use’, Reuters, 8 June 2023, available at http://www.reuters.com/legal/transactional/another-us-judge-says-lawyers-must-disclose-ai-use-2023-06-08/; ‘Order on Artificial Intelligence’, United States Court of International Trade of the Honorable Stephen Alexander Vaden, Judge, 9 June 2023, available at http://www.cit.uscourts.gov/sites/cit/files/Order%20on%20Artificial%20Intelligence.pdf.

[66] Ron Karjian, ‘How to detect AI-generated content’, TechTarget, 2 August 2023, available at http://www.techtarget.com/searchenterpriseai/feature/How-to-detect-AI-generated-content; Justin Gluska, ‘How To Check If Something Was Written with AI (ChatGPT)’, Gold Penguin, updated 11 September 2023, available at https://goldpenguin.org/blog/check-for-ai-content/.

[67] Microsoft President Brad Smith has described deep fakes as his biggest concern around AI: Diane Bartz, ‘Microsoft chief says deep fakes are biggest AI concern’, Reuters, 25 May 2023, available at http://www.reuters.com/technology/microsoft-chief-calls-humans-rule-ai-safeguard-critical-infrastructure-2023-05-25/.

[68] In the 2001 International Court of Justice case Maritime Delimitation and Territorial Questions between Qatar and Bahrain (Qatar v. Bahrain), 2001 ICJ Rep. 40 (16 March 2001), 139 ILR 1, Qatar’s memorial was initially accompanied by 82 forged documents: ‘Qatar v. Bahrain: massive forgeries’, Chapter 8 in W Michael Reisman and Christina Skinner, Fraudulent Evidence before Public International Tribunals: The Dirty Stories of International Law (Cambridge University Press, 2014).

[69] As evidenced by a German artist, who, in April 2023, won the Sony World Photography award for a photograph that he later revealed to be an AI creation: see Paul Glynn, ‘Sony World Photography Award 2023: Winner refuses award after revealing AI creation’, BBC News, available at http://www.bbc.co.uk/news/entertainment-arts-65296763; Guy Alon, Azmi Haider and Hagit Hel-Or, ‘Judicial Errors: Fake Imaging and the Modern Law of Evidence’, 21 UIC Rev. Intell. Prop. L. 82 (2022), available at https://repository.law.uic.edu/cgi/viewcontent.cgi?article=1512&context=ripl.

[70] Victor Tangermann, ‘Reality Is Melting as Lawyers Claim Real Videos Are Deepfakes’, Futurism, 10 May 2023, available at https://futurism.com/reality-melting-lawyers-deepfakes. See also ‘Elon Musk’s statements could be “deepfakes”, Tesla defence lawyers tell court’, The Guardian, 27 April 2023, available at https://www.theguardian.com/technology/2023/apr/27/elon-musks-statements-could-be-deepfakes-tesla-defence-lawyers-tell-court. In Huang v. Tesla, a wrongful death lawsuit stemming from a fatal crash involving a Tesla Model X, the court was not convinced that the vague possibility of the existence of deepfakes involving Musk was sufficient to dismiss video evidence (Sz Huang et al v. Tesla Inc. dba Tesla Motors, Inc. et al., Superior Court of California, County of Santa Clara, State Civil Lawsuit No. 19CV346663).

[71] Holland & Knight, ‘Deepfakes Could Be Arbitration’s Next Gen AI Shake-Up’, 13 May 2024, available at https://www.hklaw.com/en/news/intheheadlines/2024/05/deepfakes-could-be-arbitrations-next-gen-ai-shakeup.

[72] Leonardo F Souza-McMurtrie, ‘Arbitration Tech Toolbox: Deepfakes and the Decline of Trust’, Kluwer Arbitration Blog, 4 October 2023, available at https://arbitrationblog.kluwerarbitration.com/2023/10/04/arbitration-tech-toolbox-deepfakes-and-the-decline-of-trust/#:~:text=Deepfakes%20compromise%20the%20authenticity%20of,in%20fact%2C%20they%20are%20not.

[73] Alon, Haider and Hel-Or, supra note 69. Furthermore, in certain US courts, judges may require disclosure of the use of AI within proceedings. See Chan and Limond, supra note 62.

[74] Jordan Bakst, et al., ‘Artificial Intelligence and Arbitration: A US Perspective’, Dispute Resolution International, Vol. 16, No. 1 (May 2022), available at http://www.cov.com/-/media/files/corporate/publications/2022/05/artificial-intelligence-and-arbitration-a-us-perspective_bakst-harden-jankauskas-mcmurrough-morril.pdf.

[75] A recent study found that only 51 per cent of lawyers thought generative AI should be used for legal work: see ‘Generative AI could radically alter the practice of law’, The Economist, 6 June 2023, available at http://www.economist.com/business/2023/06/06/generative-ai-could-radically-alter-the-practice-of-law.

