‘AI Is Powerful but Risky’: Experts Debate Accountability, Disclosure in Construction Arbitration

The Society of Construction Law, India, in collaboration with the Society of Construction Law, Mumbai, hosted Construction Law 3.0: From Concrete to Code, a one-day international conference on construction law and arbitration held on April 25, 2026, at Trident BKC, Mumbai. The conference brought together leading professionals to discuss emerging trends at the intersection of construction law and technology. Session 2 explored the theme “AI in the Toolbox: Agentic Bots, Hallucinations, and the Future of Construction Disputes” and featured the following panellists:

  1. Gaurav Juneja – Partner, Khaitan & Co., India
  2. Pratyush Sharma – Director, Accuracy, India
  3. Joseph Pothenkatta – Partner, SPI Legal Office, India
  4. Wee Jian Ang – Partner, Pinsent Masons, Singapore
  5. Arjun Doshi – Head, IP Legal, RMW, Adani Group, India

Opening the session, Mr Gaurav Juneja highlighted the dual nature of artificial intelligence in dispute resolution. While AI offers “speed, efficiency, and analytical support,” he cautioned that it also raises concerns around “reliability, accountability, disclosure, and procedural fairness,” noting that the real question is how stakeholders can engage with AI responsibly.

Setting the stage for the in-house perspective, Mr Juneja directed the discussion to Mr Arjun Doshi on how organisations can identify the risk of AI hallucinations at an early stage before such errors become embedded in claims and legal strategy.

Responding, Mr Doshi explained that AI hallucinations differ from traditional search errors, as large language models generate responses based on statistical prediction rather than verified data. He illustrated this with a practical example where an AI tool inserted a “14-day notice period” into a contract that only required notice “as soon as practically possible,” highlighting how such outputs can appear polished yet be fundamentally incorrect.

He cautioned that unchecked reliance on such outputs can have significant consequences, including flawed claim quantification, premature settlements, reputational damage to counsel, and wasted costs when errors are discovered later. Referring to global instances, he noted that courts have penalised lawyers for citing non-existent case law generated by AI, reinforcing the risks of unverified usage.

On mitigation, Mr Doshi emphasised the need for internal safeguards, including training in-house teams to verify AI outputs against underlying contracts and authentic legal databases. He noted that organisations are also attempting to formalise verification processes and ensure human oversight before relying on AI-generated material.

The discussion then moved to accountability, with Mr Juneja raising the question of who bears responsibility when AI-generated errors persist despite such safeguards. Mr Doshi acknowledged that liability currently rests largely with lawyers, as courts have imposed costs for reliance on flawed AI outputs. However, he pointed out that the accountability of AI vendors remains an open question, particularly given contractual limitations on liability.

Addressing emerging technologies, Mr Juneja also raised the use of agentic AI systems capable of operating without human intervention. Mr Doshi clarified that such tools are not yet being deployed in high-stakes construction disputes due to lack of trust and the complex, nuanced nature of contractual obligations. He emphasised that human intervention remains indispensable, particularly in critical tasks such as issuing notices.

____________________________________________________

Building on Mr Arjun Doshi’s remarks on risk and accountability, Mr Juneja noted that once AI begins influencing claims, submissions, and expert materials, the focus inevitably shifts to transparency before tribunals and opposing parties. Against this backdrop, he posed the question to Mr Joseph Pothenkatta on whether arbitration is moving towards a duty to disclose the use of AI in preparing legal materials.

Responding, Mr Joseph Pothenkatta pointed out that several international arbitration institutions have already begun addressing AI usage through evolving guidelines. However, he clarified that disclosure is not uniform. He said, “If you are using AI only for the purpose of research, you don’t need to disclose it. But definitely, if expert evidence, if evidence on quantification… in those circumstances on which the tribunal is totally reliant, you need to mandatorily disclose.” He emphasised that professional responsibility ultimately remains with lawyers, adding that misuse of AI cannot shift liability away from the user.

Taking the discussion further, Mr Juneja contextualised a practical dilemma faced by practitioners, where AI outputs are thoroughly cross-verified and no apparent error remains. He questioned whether, in such circumstances, there still exists an obligation to disclose AI usage, particularly when the process mirrors traditional reliance on junior associates.

Answering this, Mr Joseph Pothenkatta offered a nuanced perspective, explaining that AI operates as a “black box,” unlike human inputs, which can be explained and scrutinised. He noted that if a lawyer cannot explain how a conclusion has been reached, reliance on such output becomes problematic. The nature of the tool, the data it relies on, and the extent of its role in shaping conclusions would all be relevant in determining whether disclosure is required.

The discussion then shifted towards procedural implications. Framing the next question, Mr Juneja highlighted that once disclosure enters the picture, it inevitably affects discovery, document production, and even cross-examination. He raised the question of how disclosure of AI usage could alter the scope of these processes.

Responding from an international arbitration perspective, Mr Wee Jian Ang noted that disclosure is not always mandatory but may often be strategically prudent. He explained that while routine drafting assistance may not require disclosure, expert reports influenced by AI could invite deeper scrutiny. In such cases, parties may face questions regarding the data sets used, prompts given to AI systems, and the extent of human verification involved. He suggested that disclosure decisions may ultimately be guided as much by strategy as by obligation, particularly to avoid credibility challenges during cross-examination.

Continuing on the procedural impact, Mr Juneja then framed a follow-up concern: whether such disclosures could expand the scope of discovery and complicate proceedings.

Mr Ang acknowledged that disclosure could indeed open the door to broader document production requests, including inquiries into underlying data, prompts, and verification processes. He cautioned that this raises issues of confidentiality, privilege, and proportionality under established arbitration rules, while also increasing the overall complexity and duration of proceedings.

Broadening the lens further, Mr Juneja questioned whether the integration of AI enhances procedural fairness or instead creates an additional layer of disputes.

Mr Ang responded that while AI may improve analytical capabilities, it is likely to increase procedural burdens, extend timelines, and generate additional interlocutory disputes. However, he noted that this added complexity may still serve a purpose if it allows tribunals to better interrogate and understand AI-generated outputs.

The discussion then turned to the future of dispute resolution, and Mr Gaurav Juneja raised a forward-looking question on the possibility of AI arbitrators and whether awards rendered by such systems would be enforceable. Responding, Mr Joseph Pothenkatta expressed scepticism from an Indian legal standpoint, stating, “In India it may not be enforceable if it’s a purely AI arbitrator, it may be treated as against public policy. You may have a ground of denying the enforcement of that award.”

Mr Wee Jian Ang suggested that while errors on merits may not invalidate such awards, challenges could arise on procedural grounds. He noted, “One may say that a breach of natural justice has occurred and that will constitute a ground for challenge,” particularly where AI-generated decisions fail to adequately address party arguments or reasoning.

Extending the discussion, Mr Gaurav Juneja flagged emerging developments around AI-driven adjudication, noting that institutional frameworks are already experimenting with AI arbitrators trained on historical data.

He then pivoted the discussion towards expert evidence, highlighting that many of the concerns around AI ultimately converge at the stage of expert testimony, which must remain transparent, independent, and capable of scrutiny. Framing the next issue, he asked Mr Pratyush Sharma at what stage an expert should use AI in their work and whether such usage ought to be disclosed while preparing an expert report.

Mr Pratyush Sharma highlighted that while AI can assist in early-stage tasks such as document sorting and chronology preparation, core expert functions must remain human-led. He explained, “AI can help an expert find the haystack faster, but it still can’t tell us which… really matters,” emphasising the need for thorough validation before reliance.

On disclosure, Mr Sharma supported transparency, stating that even when AI is used, it must be validated and disclosed to maintain credibility. He cautioned that without proper verification, experts risk merely endorsing machine-generated outputs rather than providing independent opinions.

As the session drew to a close, Mr Juneja posed a broader question: whether AI would simplify or complicate construction disputes. Mr Arjun Doshi acknowledged its benefits, noting, “It will definitely make arbitration processes cost-efficient but it will definitely not substitute human intervention.”

While Mr Ang observed that AI may streamline preparation but complicate hearings, Mr Pothenkatta took a more optimistic view, stating, “AI will simplify things it will make it far more easier with the right tool.”

However, Mr Sharma offered a more cautious perspective, illustrating how AI outputs can shift based on prompts and concluding, “AI amplifies your thoughts. You indicate something in the prompt, it picks it up and realizes that this is what the user is expecting from me. So it amplifies that and you then think, oh, it’s understood me. But that’s not actually what it should do. So if the AI tells you, sorry, I do not have data to justify an answer, then you know the AI is doing its job.”