The Gujarat High Court on Saturday introduced a comprehensive policy regulating the use of Artificial Intelligence (AI) in judicial and court administration, making it clear that AI cannot be used for any form of judicial decision-making, reasoning, or preparation of orders and judgments.
The policy on Use of Artificial Intelligence in Judicial and Court Administration underscores that while AI offers significant benefits in terms of efficiency and research, the core functions of adjudication, i.e., evaluation of evidence, interpretation of law, and delivery of reasoned judgments, must remain exclusively within the domain of the human mind. It cautions that unchecked reliance on AI may erode public trust and compromise the constitutional mandate of human-centric justice. It states:
“Artificial intelligence shall never be employed for any form of decision-making, judicial reasoning, substantive order drafting or judgment preparation… or any substantive adjudicatory process.”
The High Court has categorically prohibited the use of AI in decision-making, drafting of judgments, bail or sentencing considerations, and any substantive adjudicatory process.
Limited Use Permitted, But With Strict Safeguards
While drawing a hard line on adjudicatory functions, the Court has allowed limited use of AI as a support tool for research and administrative efficiency. AI may assist in legal research, identifying precedents, and improving drafting, translation and case management, but only subject to strict human verification of all outputs.
The Court has also raised concerns about risks associated with AI, including hallucinated legal citations, bias, data breaches, and confidentiality violations, mandating that all AI-generated content must be verified against authoritative sources before use. The policy clarifies:
“Artificial intelligence may be employed for legal research, subject to verification by applying mind.”
It mandates that all AI-generated content, whether case law, statutory references, or summaries, must be independently verified against authoritative sources before reliance.
Prohibition on Feeding Confidential Data to AI
The policy prohibits feeding any confidential case-related data, personal information of litigants, or privileged material into public AI tools, in line with data protection obligations under the Digital Personal Data Protection Act, 2023.
“Confidential case information… shall never be entered into any public AI tool.”
This includes litigant data, evidence, and privileged communications, aligning with obligations under data protection law. The policy applies across the High Court and subordinate judiciary, covering judges, court staff, legal assistants, and even interns. It also extends to the use of AI tools on both official and personal devices for court-related work.
Accountability and Consequences for Misuse
Emphasising accountability, the High Court has clarified that reliance on AI cannot be used as a defence for errors or misconduct, and any violation of the policy may invite disciplinary action in addition to civil or criminal liability. It states:
“The use of AI does not constitute a defence to a finding of error, misconduct, or professional negligence… Violation of any provision of this Policy shall be treated as misconduct and shall attract appropriate action including departmental or disciplinary proceedings under applicable service rules. The above consequences are in addition to, and not in derogation of, any civil or criminal liability that may arise under any applicable law, including the Information Technology Act, 2000, the DPDP Act, 2023, or the Bharatiya Nyaya Sanhita, 2023.”
The policy firmly establishes that AI cannot reduce or shift human responsibility in judicial functioning. Judges remain fully accountable for every order and judgment, regardless of whether AI tools are used in the process. It extends this responsibility to court staff as well, making it clear that any AI-generated content used in official work must be verified and owned by the person using it.
The policy also requires that if legal assistants or researchers use AI tools, the concerned judge must be informed.