Indian High Courts Ban AI Tools for Judicial Work, Citing Risks to Human Judgment


The Punjab and Haryana High Court has prohibited judicial officers from using AI tools like ChatGPT, Gemini, and Copilot for writing judgments and conducting legal research. The Gujarat High Court has released a comprehensive AI policy that bans AI use in decision-making while allowing limited administrative applications, highlighting the judiciary's cautious approach to integrating AI into the justice system.

Punjab and Haryana High Court Bans AI Tools in Judicial Work

The Punjab and Haryana High Court has issued a directive prohibiting judicial officers from using AI tools in judicial work, specifically targeting platforms like ChatGPT, Gemini, and Copilot for writing judgments and conducting legal research. The instruction, communicated through a letter from the Registrar-General on Monday, was sent to all district and sessions judges across Punjab, Haryana, and Chandigarh [1]. The Chief Justice emphasized that any violation of these instructions will be viewed seriously, marking a firm stance against the use of artificial intelligence tools in core judicial functions [2].

Source: DT

Gujarat High Court Releases Comprehensive AI Policy for Courts

The Gujarat High Court unveiled a detailed AI policy at a conference of district judiciary judges on Saturday, establishing strict boundaries for AI in judicial operations. The policy explicitly prohibits the use of AI for any form of decision-making, judicial reasoning, order drafting, judgment preparation, bail or sentencing considerations, or any other substantive adjudicatory process [1]. According to the policy, artificial intelligence should enhance the speed and quality of justice delivery rather than replace judicial reasoning, reflecting institutional concerns about the erosion of human judgment [3].

Source: MediaNama

Strict Verification and Accountability Requirements

The Gujarat High Court's AI policy mandates rigorous verification of all AI-generated outputs, including citations, legal propositions, summaries, translations, and transcripts, against primary sources. Judges and court officers retain full personal responsibility for all outputs, including AI-assisted work, and must disclose any AI use in research or briefs. The policy makes clear that AI use does not mitigate liability for inaccuracies, misconduct, or negligence, ensuring that accountability remains with human decision-makers [3]. This framework underscores the judiciary's constitutional mandate to deliver justice through human reasoning and accountability.

Confidentiality and Data Protection Concerns

The Gujarat policy addresses critical concerns around confidentiality and data protection, barring users from entering confidential or case-related data into public AI tools. It restricts public tools to general, non-case-specific tasks, and even approved enterprise AI systems may not process sensitive data such as witness identities, confidential court information, or special category personal data. All AI-related processing must comply with the Digital Personal Data Protection Act, 2023, and users must not repurpose personal data beyond its original judicial purpose [3]. Violations may result in disciplinary action and civil or criminal liability.

Growing Pattern of High Court Restrictions

The Punjab and Haryana directive follows an emerging pattern of Indian High Courts taking a cautious approach to integrating AI into the justice system. In July 2025, the Kerala High Court similarly barred district court judges from using AI tools to arrive at any findings, reliefs, orders, or judgments [3]. This coordinated restraint reflects broader institutional concerns about over-reliance on AI, potential data bias, and the risk of discriminatory outcomes in the adjudicatory process.

Unaddressed Concerns About Data Quality and Bias

Despite the comprehensive nature of Gujarat's AI policy for courts, critics note significant gaps in addressing how underlying data quality shapes AI-assisted administrative outcomes. A DFL report from February 2026 highlights that judicial datasets are often incomplete, inconsistent, and unrepresentative, increasing the risk of skewed outputs. The report warns that even efficiency-focused tools for case scheduling or prioritization can produce discriminatory outcomes when built on limited or uneven metadata, potentially disadvantaging certain litigants or case categories. The Gujarat policy does not mandate dataset audits, bias testing, or representativeness checks, nor does it include system-level safeguards such as audit logs, explainability requirements, or institutional technical oversight needed to detect and correct such risks [3].

Implications for India's Justice System

These High Court restrictions signal a deliberate choice to prioritize judicial integrity over technological efficiency in core adjudicatory functions. For legal professionals and litigants, this means drafting judgments and legal research will continue to rely exclusively on human expertise, preserving the constitutional requirement for reasoned, accountable decision-making. However, the policies leave open questions about how India's judiciary will balance the potential benefits of AI in administrative tasks against the risks of bias and data quality issues. As more High Courts develop their frameworks, the legal community will be watching whether a unified national approach emerges or whether regional variations in AI adoption create inconsistencies in justice delivery across different jurisdictions.
