International Arbitration: AI Arbitrator Launched by the AAA-ICDR

February 4, 2026

Reading Time: 10 min

The American Arbitration Association – International Centre for Dispute Resolution (AAA-ICDR) has launched an artificial intelligence (AI) arbitrator tool (the AI Arbitrator) that will reportedly deliver speedy, efficient and trustworthy awards in document-only, two-party construction cases. Whilst AI tools, including those used in dispute resolution, offer significant opportunities to augment human abilities, they also present risks that parties to international arbitration should understand, so that they can implement risk management measures and maximise the benefits of using institutional AI tools.

How Does the AI Arbitrator Work?


Andrew Barton, Vice President at the AAA-ICDR who oversees the development and deployment of the AI Arbitrator, told us that: “[The AAA-ICDR is] approaching AI Arbitrator as a platform, not a one-off tool. This initial release establishes the core capabilities and controls, and it positions [the AAA-ICDR] to expand responsibly as additional use cases and case complexities are brought into scope.”

The AI Arbitrator was developed by the AAA-ICDR in collaboration with QuantumBlack, AI by McKinsey (McKinsey & Company acquired London-based QuantumBlack in 2015). It was trained on more than 1,500 construction arbitration awards, benefitting from actual arbitrator reasoning from the AAA-ICDR construction caseload. The AI Arbitrator was further fine-tuned with expert-labelled examples and human arbitrator input. The tool was tested iteratively, with continuous feedback provided by construction lawyers and trained arbitrators from the AAA-ICDR Construction Panel. From its inception, through its development and release, to ongoing monitoring and support, the AI Arbitrator incorporates and implements the AAA-ICDR’s core values of efficiency, transparency and reliability, under the supervision of the institution’s AI Governance Committee. As Mr Barton explained further, “For nearly a century, the AAA-ICDR has focused on improving the speed, quality, and integrity of dispute resolution. AI Arbitrator reflects that same mission—pairing responsible innovation with strong process controls and human accountability, so parties can benefit from efficiency while preserving the core safeguards of arbitration.”

In order for the AI Arbitrator to be used, both parties must agree to refer their dispute to it—preserving the principle of consent, one of the main pillars of arbitration. If there is no agreement, the case proceeds as a traditional AAA arbitration, with a human arbitrator. The option to refer disputes to the AI Arbitrator is currently available only for low-value, document-only construction cases that do not involve live witnesses or complex factual or legal issues. The AAA-ICDR plans to expand the AI Arbitrator offering to other industries and to higher-value claims this year.

The AI Arbitrator works on the AAA-ICDR’s Case Management Platform, via a prompt-driven interface. The tool summarises the claims, submissions and information that are uploaded, and the parties to the arbitration are given the opportunity to review and comment on the output. The AI Arbitrator then “analyses” the evidence, “evaluates” the merits of the claims, applies the relevant law and “uses legal reasoning” to generate recommendations (see below in relation to the risks of anthropomorphising AI), before drafting a proposed award.

A human arbitrator reviews the draft award, makes any necessary revisions and issues a final, binding award. A human arbitrator is involved throughout the process, as the AAA-ICDR states that each output by the AI Arbitrator is validated for logic, fairness and legal soundness by an experienced human arbitrator, trained in oversight of the AI Arbitrator. Mr Barton at the AAA-ICDR confirmed that “[The AAA-ICDR] built AI Arbitrator around the core principles of arbitration—party consent, process integrity, and accountability. AI assists with defined workflow tasks, and a human arbitrator remains responsible for the decision and the signed award.”

As discussed below, the AI Arbitrator works faster than an arbitration conducted entirely by humans, with the AAA-ICDR currently projecting that a typical low-value, documents-only case (which would usually take 60 to 75 days until an award is issued) will require only around 30 to 45 days for a final award when the parties use the AI Arbitrator.
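Purely as an illustration of the consent-gated, human-in-the-loop process described above—not the AAA-ICDR’s actual implementation, and with every name below hypothetical—the workflow can be sketched as a simple pipeline:

```python
from dataclasses import dataclass

# Hypothetical sketch of the workflow described above.
# None of these names reflect the AAA-ICDR's actual system.

@dataclass
class Award:
    text: str
    issued_by_human: bool  # the final, binding award must be a human act

def ai_summarise(submissions):
    """AI step: summarise the uploaded claims and evidence for party review."""
    return " / ".join(f"summary of {doc}" for doc in submissions)

def ai_draft_award(summary, party_comments):
    """AI step: analyse the record, apply the relevant law, draft a proposed award."""
    return Award(
        text=f"draft award based on [{summary}] noting comments {party_comments}",
        issued_by_human=False,
    )

def human_review(draft):
    """Human arbitrator validates logic, fairness and legal soundness, then signs."""
    return Award(
        text=draft.text + " (revised and signed by human arbitrator)",
        issued_by_human=True,
    )

def run_case(submissions, both_parties_consent):
    # Consent is the gate: absent agreement, a traditional arbitration proceeds.
    if not both_parties_consent:
        return "traditional AAA arbitration with a human arbitrator"
    summary = ai_summarise(submissions)
    comments = ["claimant comment", "respondent comment"]  # parties review the summary
    draft = ai_draft_award(summary, comments)
    return human_review(draft)  # final, binding award is issued by a human

award = run_case(["statement of claim", "invoices"], both_parties_consent=True)
print(award.issued_by_human)  # True: the signed award remains a human act
```

The sketch is only meant to make the sequencing concrete: AI assists with defined workflow tasks (summarising, drafting), while consent gates entry into the process and a human arbitrator remains responsible for the signed award.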

What Are the Advantages of Using the AI Arbitrator?


The AAA-ICDR states that the AI Arbitrator is designed to improve how disputes are resolved and that it will deliver the “holy grail” of fast, low-cost and high-quality arbitration. As matters stand, the AI Arbitrator is used in cases where “speed and efficiency are paramount” and, reportedly, trialling shows 20–25% faster resolution times and 35% or greater cost savings.

If trustworthy, speedy and cost-effective arbitration can be achieved more readily with the AI Arbitrator than with a human arbitrator, the advantages of pursuing AI arbitration will naturally include cost and time savings. Using the AI Arbitrator may also bring further benefits, such as reducing the risk of parties not pursuing a claim, or settling a claim unfavourably, owing to cost pressures.

As machines have no downtime (other than for routine and vital upgrades, or in case of an incident), the speed with which the AI Arbitrator would work to deliver results may also be of benefit—and possibly of benefit not only to the parties to the arbitration at issue, but in general to parties using international arbitration. It may resolve a high number of disputes quickly, freeing up time for the human arbitrators, counsel and parties to concentrate on factually and legally more complex cases.

As a tool in support of dispute resolution, and one likely to be developed further (especially as more parties start using it and as technologies such as agentic AI mature), the AI Arbitrator could provide invaluable assistance before the award is ultimately issued by the human-in-the-loop arbitrator.

What Are the Risks of Using the AI Arbitrator and How Can They Be Mitigated?


The AI Arbitrator is a novel development—and goes further than the more widespread uses of AI tools to assist tribunals and parties, on which guidance has already been issued by some arbitral institutions and leading third parties such as the Chartered Institute of Arbitrators (CIArb), the Silicon Valley Arbitration & Mediation Center (SVAMC), the SCC Arbitration Institute, the Vienna International Arbitration Centre (VIAC) and the AAA-ICDR itself. Whilst the AAA-ICDR’s pioneering work in this area should be recognised and commended, users of the AI Arbitrator would be well advised to consider the potential risks, and how to mitigate them, when deploying this tool.

First, in terms of the legality of submitting disputes to an AI Arbitrator, there may be a risk of a party alleging that the arbitral tribunal has exceeded its powers by using AI tools to assist arriving at the award. The requirement for both parties to agree to the AI Arbitrator helps to reduce the risk of such allegations, and hence threats to the integrity and reliability of the arbitral process.

Second, in terms of the enforceability of an award drafted with the help of the AI Arbitrator, there may be concerns that recognition and enforcement could be challenged under the New York Convention, for example by developing arguments that the arbitral award was not “made by arbitrators” (as required under Article I.2), or is contrary to the public policy of the country in which a party wishes to recognise and enforce the award (under Article V.2(b)). The AAA-ICDR states that the final award will be made by a human arbitrator, revising the draft award prepared by the AI Arbitrator to the extent necessary. This may help reduce the risk of successful challenges under Article I.2 of the New York Convention but may not eliminate it altogether. Although many domestic laws do not currently prohibit arbitrators from using AI tools, “public policy” in several jurisdictions encompasses a wide range of issues that could touch on AI adjudication. Before agreeing to refer their dispute to the AI Arbitrator, users of AI in international arbitration should consider at the outset which countries they might want to enforce an award in and check the legal and regulatory landscape in those countries with respect to AI tools.

Third, the parties and the (ultimate human) tribunal may be subject to laws that impact their use of the AI Arbitrator. A plethora of existing laws globally regulate the use of AI tools, including in relation to personal data, cybersecurity, intellectual property (IP), equality and discrimination. New emerging laws specifically target AI in a rapidly changing regulatory landscape. For example, the European Union (EU) AI Act imposes obligations on providers (i.e., developers) and deployers (i.e., users) of AI tools, including numerous obligations in relation to “high-risk” AI systems. One of the categories of AI systems designated as “high-risk” is “AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution”. The majority of obligations in relation to “high-risk” AI systems under the EU AI Act fall on the providers of such systems—and focus on issues such as the quality of training, validation and testing data sets, transparency, accuracy, robustness, cybersecurity and the implementation of risk management and quality management systems. The obligations on deployers of “high-risk” AI systems include implementing the human oversight measures indicated by the provider of the AI system, ensuring input data is relevant for the intended purpose, monitoring the operation of the AI systems, keeping logs, reporting any “serious incidents” or malfunctions to the providers and/or regulator, and carrying out a Data Protection Impact Assessment (DPIA) (under the General Data Protection Regulation (GDPR)) where applicable. Before parties elect to refer their dispute to the AI Arbitrator, they should consider whether the EU AI Act, or any other relevant law, applies to them and what measures they (or others in the dispute chain) might need to take to comply. This is particularly important in an environment where laws and regulations are rapidly changing to keep pace with technology.

Fourth, there are cybersecurity risks with AI tools which the parties should be aware of and take into account when deciding whether to use the AI Arbitrator. In particular, in a cyber attack, AI tools may be manipulated into producing output that evades their filters and guardrails, for example disclosing confidential information that should not have been disclosed. Users of the AI Arbitrator should be particularly mindful of implementing measures to enhance their cybersecurity posture, to guard against external as well as insider threats.

Fifth, the AI Arbitrator presents risks inherent to all AI tools, concerning accuracy and reliability. It is well known that AI tools come with transparency and accountability challenges, as it is not always clear how the output has been derived from the input provided. There is also a risk of anthropomorphising the AI—the AI Arbitrator cannot in reality think in the way humans think—and parties to the arbitration may attribute human qualities to, and expect them from, a tool which is not human. The AAA-ICDR states that it has implemented measures to address those risks: it has invested in the data sets on which the model was trained and tested; detected and corrected biases; included regular feedback by construction specialists; and designed a human-in-the-loop framework which helps ensure that any output by the AI Arbitrator is checked, considered and confirmed by a human. Against that background, it should not be forgotten that humans are not perfect either—and arbitrators strive to address similar challenges in relation, for example, to biases, fairness and justice when attending hearings and delivering awards—deploying their skills, as discussed further below. (Note in that context that the AAA-ICDR Arbitration Rules include an exclusion of liability for arbitrators.) We will issue another briefing once there is more reliable data on the perceived quality of awards, and especially in light of the AAA-ICDR’s intention to expand the AI Arbitrator offering this year.

The Human Element of International Arbitration


The development of the AI Arbitrator by the AAA-ICDR is exciting—and such tools are advancing at a remarkable pace. Whilst technology affects users of international arbitration, counsel and tribunals, the human element of dispute resolution continues to be crucial, as discussed further in last year’s Akin Arbitration Lecture. Deciding which party should succeed in an arbitration usually involves selecting from a spectrum of options informed by law, fact and context. Conflicting factual and legal statements are weighed by human arbitrators, relying on their wisdom, courage and discernment of nuance, which an AI tool may not necessarily yet possess. Hearings themselves are deeply human encounters, requiring empathy, credibility assessments and reasoned judgment. Although certain AI tools may be designed to please their users, in general AI tools do not “value”, or “care” about, the consequences of their output in the same way that humans do. As long as users of the tools are aware of these constraints, the assistance that AI provides will continue to be helpful.

Next Steps


The AI Arbitrator is now live. Where two parties have a low-value construction dispute with no complex factual or legal issues (and which can be decided on the documents only), they might explore whether to refer that dispute to the AI Arbitrator. With possibly varying risk appetites, both parties will have to agree that the benefits of the AI Arbitrator outweigh the risks, as mitigated. Sufficient uptake and positive feedback will no doubt play a role in the next steps for AI in international arbitration—including for the expansion planned by the AAA-ICDR to other industries, higher value and more complex disputes, as well as potentially the development of similar tools by other arbitral institutions.

The Akin team will keep you abreast of these developments.


© 2026 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin Gump network operates worldwide, please see our Legal Notices page.