Artificial Intelligence has officially arrived on the laptops and iPads of arbitration practitioners. Artificial Intelligence promises to make arbitrations significantly more efficient – lawyers might feel tempted to use Artificial Intelligence to draft parts of their written submissions, and arbitrators might feel tempted to use Artificial Intelligence to draft parts of their awards. However, there are as yet no rules for the use of Artificial Intelligence in arbitration.

The Silicon Valley Arbitration & Mediation Center (SVAMC) set up a working group to fill this gap. On 30 April 2024, this working group published the final Guidelines on the Use of Artificial Intelligence in Arbitration. The Guidelines aim for a fair, secure and balanced use of Artificial Intelligence in arbitration. They were developed to complement national regulations on the handling of Artificial Intelligence, with a specific focus on arbitration.

According to the Guidelines, Artificial Intelligence is defined as any computer system that “perform[s] tasks commonly associated with human cognition, such as understanding natural language, recognizing complex semantic patterns and generating human-like outputs.” The drafters intended to cover all current and future variations of Artificial Intelligence.

The Guidelines are divided into three parts. Part 1 applies to all participants in the arbitration process, Part 2 addresses the parties and party representatives and Part 3 focuses on arbitrators.

We have selected four Guidelines that seem to be of particular importance:

GUIDELINE 2: Safeguarding confidentiality

“All participants in international arbitration are responsible for ensuring their use of AI tools is consistent with their obligations to safeguard confidential information (including privileged, private, secret, or otherwise protected data). They should not submit confidential information to any AI tool without appropriate vetting and authorization. Special attention should be paid to policies on recording, storage, and use of prompt or output histories and of any other confidential data submitted to the AI tool. Only AI tools that adequately safeguard confidentiality should be used with confidential information. Participants should assess the data use and retention policies offered by available AI tools and opt for secure solutions. Where appropriate, participants should redact or anonymize materials submitted to an AI tool.”

Guideline 2 contains a warning for all participants in the arbitration process: many publicly available Artificial Intelligence tools – such as ChatGPT – record and store the data provided by the user. This means that if a lawyer or arbitrator enters confidential information, this information might become available to others. Therefore, Guideline 2 stipulates that only Artificial Intelligence tools that adequately safeguard confidentiality should be used with confidential information.
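Guideline 2's suggestion to redact or anonymize materials can be illustrated with a minimal sketch. The following Python snippet is purely illustrative – the party names, placeholders and patterns are hypothetical assumptions, not taken from the Guidelines – and shows one simple way to replace confidential terms before any text is submitted to an external AI tool:

```python
import re

# Hypothetical example: party names to be anonymized before any text is
# submitted to an external AI tool. These names are illustrative
# placeholders, not drawn from a real case.
CONFIDENTIAL_TERMS = {
    "Acme GmbH": "[CLAIMANT]",
    "Beta Ltd": "[RESPONDENT]",
}

def anonymize(text: str) -> str:
    """Replace known confidential terms and monetary amounts with placeholders."""
    for term, placeholder in CONFIDENTIAL_TERMS.items():
        text = text.replace(term, placeholder)
    # Mask monetary amounts such as "EUR 1,500,000"
    return re.sub(r"\b(EUR|USD|GBP)\s?[\d.,]+\b", "[AMOUNT]", text)

if __name__ == "__main__":
    submission = "Acme GmbH claims EUR 1,500,000 from Beta Ltd."
    print(anonymize(submission))
    # -> "[CLAIMANT] claims [AMOUNT] from [RESPONDENT]."
```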

GUIDELINE 3: Disclosure

“Disclosure that AI tools were used in connection with an arbitration is not necessary as a general matter. Decisions regarding disclosure of the use of AI tools shall be made on a case-by-case basis taking account of the relevant circumstances, including due process and any applicable privilege. Where appropriate, the following details may help reproduce and evaluate the output of an AI tool:

1. the name, version, and relevant settings of the tool used;

2. a short description of how the tool was used; and

3. the complete prompt (including any template, additional context, and conversation thread) and associated output.”

The drafters decided against a general duty to disclose that Artificial Intelligence was used. The reason for this hesitation is probably that Artificial Intelligence is already used in many areas today: the review of electronic communication might be powered by e-discovery software that uses machine learning, legal research might be powered by Artificial Intelligence that surfaces only the most relevant results, written submissions might be spell-checked by Artificial Intelligence, and so on.

The Guidelines leave it to the arbitral tribunal to decide whether specific uses of Artificial Intelligence tools must be disclosed. Such a decision could be included in a Procedural Order or in Supplementary Procedural Rules.

While this hesitation to stipulate a general disclosure duty is understandable, one would have expected an obligation to disclose whether Artificial Intelligence tools have been used to draft written submissions or awards.

The Guidelines give an example of what information could be disclosed. The output of Artificial Intelligence depends significantly on the input. Appropriate disclosure should therefore include the information needed to understand and verify the output generated by Artificial Intelligence. This consideration applies to generative and selective tools alike.
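For illustration only, the three details listed in Guideline 3 could be captured in a simple record like the following Python sketch; the field names and example values are hypothetical assumptions, not a format prescribed by the Guidelines:

```python
from dataclasses import dataclass

# Hypothetical sketch of a disclosure record covering the three details
# listed in Guideline 3. Field names are assumptions for illustration.
@dataclass
class AIDisclosure:
    tool_name: str    # name and version of the AI tool
    settings: str     # relevant settings of the tool
    description: str  # short description of how the tool was used
    prompt: str       # complete prompt, including context and thread
    output: str       # the associated output

record = AIDisclosure(
    tool_name="ExampleLLM v1.2",  # hypothetical tool name
    settings="default",
    description="Used to summarize the procedural history.",
    prompt="Summarize the attached procedural order ...",
    output="The tribunal ordered ...",
)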

GUIDELINE 4: Duty of competence or diligence in the use of AI

“Party representatives shall observe any applicable ethical rules or professional standards of competent or diligent representation when using AI tools in the context of an arbitration. Parties shall review the output of any AI tool used to prepare submissions to verify it is accurate from a factual and legal standpoint. Parties and party representatives on record shall be deemed responsible for any uncorrected errors or inaccuracies in any output produced by an AI tool they use in an arbitration.”

Guideline 4 points to a weakness of Artificial Intelligence: results generated by Artificial Intelligence – in particular by large language models – are often factually or legally wrong. Party representatives are therefore responsible for reviewing the output of Artificial Intelligence before using it in an arbitration. A particular weakness of Artificial Intelligence are “hallucinations”: wrong but plausible-sounding outputs, which the model presents as fact rather than as possibility. Hallucinations can be reduced by “prompt engineering” (drafting better and more specific queries) and by “retrieval-augmented generation” (providing the model with the relevant source material at the outset).
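To make the second technique concrete, here is a minimal Python sketch of the idea behind retrieval-augmented generation – the relevant source material is placed directly into the prompt, so the model can rely on it instead of inventing authorities. The function call_model is a hypothetical stand-in for whatever AI tool is actually used, not a real API:

```python
# Minimal illustrative sketch of "retrieval-augmented generation": the model
# is given the relevant source material inside the prompt instead of being
# asked to answer from memory.

def build_rag_prompt(question: str, sources: list[str]) -> str:
    """Embed retrieved source passages in the prompt so the model can cite
    them rather than invent ("hallucinate") authorities."""
    context = "\n\n".join(f"Source {i + 1}:\n{s}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not answer it, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real AI tool; not part of any real library.
    raise NotImplementedError

if __name__ == "__main__":
    sources = ["SVAMC Guideline 3: Disclosure is decided case by case ..."]
    prompt = build_rag_prompt("Must AI use always be disclosed?", sources)
    print(prompt)
```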

GUIDELINE 6: Non-delegation of decision-making responsibilities

“An arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator’s decision-making process. The use of AI tools by arbitrators shall not replace their independent analysis of the facts, the law, and the evidence.”

Guideline 6 clarifies that where the parties have chosen to submit their dispute to an arbitral tribunal – and not to an Artificial Intelligence tool – the arbitrators must not delegate their mandate to an Artificial Intelligence tool. According to the Subcommittee, the arbitrator must make the final decisions while “preserving the human element essential to the fairness and integrity of the arbitration.” Decision-making is a personal and non-delegable task. Arbitrators must therefore be careful when delegating any tasks – even administrative ones – to Artificial Intelligence, and must in any case review its output. Even as Artificial Intelligence evolves, the responsibility of arbitrators remains indispensable.

All in all, the Guidelines emphasize that using Artificial Intelligence entails the obligation to do so responsibly. The Subcommittee considers Artificial Intelligence in arbitration useful to support the participants in handling large amounts of data and information – but definitely not in the actual decision-making process.

Author

Dr. Markus Altenkirch LL.M. is a member of Baker McKenzie's Dispute Resolution teams in Düsseldorf and London. Markus focuses on international arbitration and currently represents clients in ICC, DIS, LCIA, and HKIAC arbitrations. Markus primarily advises on post-M&A as well as construction disputes. Moreover, Markus regularly advises on disputes in the pharmaceutical industry. In 2021, Markus started his own podcast series, #zukunft, in which he and his colleague Lisa Reiser interview leading arbitration practitioners and in-house lawyers on the future of international arbitration. Markus teaches at the University of Mainz and regularly publishes in the field of international arbitration. He is a contributor and editor for Global Arbitration News. Markus Altenkirch can be reached at Markus.Altenkirch@bakermckenzie.com and +49 211 311160 and +44 20 7919 1000.

Author

Raika Hoßbach is a member of the Dispute Resolution team at Baker McKenzie in Frankfurt. Raika is currently a law clerk and focuses on international arbitration. She can be reached at raika.hossbach@bakermckenzie.com.