2 Sources
[1]
Why AI shouldn't be used even to decide 'simple' court cases
In just a few years, generative artificial intelligence (gen AI) has brought significant changes to many industries, from healthcare to education, entertainment to finance, and even law. But the use of gen AI in court verdicts poses significant risks to justice. Erroneous outcomes based on "hallucinated" information, discriminatory decisions and a lack of transparency are all concerns when this technology is introduced to courtrooms. Yet a number of judges around the world have already used it in decision-making and judgment writing. This is why some jurisdictions, including the UK, have issued guidelines for judges on AI use.

Broadly, the guidelines suggest judges might use AI for preparatory work such as drafting summaries of long documents, translating legal documents, identifying legal precedents or enhancing the readability of documents. They recommend against applying it to core judicial functions, including decision-making.

Recently, some senior judicial leaders have suggested AI might be used to decide "low-stakes" or less complex cases with adequate precautions, such as keeping a human judge in the loop. In a November 2024 speech, the UK's second most senior judge, Geoffrey Vos, spoke of a "spectrum" of legal decisions that AI might soon make, or help make. Vos said the use of AI for "broadly mechanical decisions, like those about the amount of a pension or benefits, or the calculation of personal injury damages and loss of earnings" would likely save money and time. But he called for discussion on whether such use would violate essential human rights. A year later, Vos again called for "serious debate" about what rights humans should have protected in this context, and urged that AI be "used responsibly, effectively and safely in legal systems and processes".
A number of jurisdictions are already testing or using AI in such "mechanical" cases. Estonia uses a semi-automated small-claims system in civil proceedings for monetary claims up to €7,000 (£6,100), with human clerks overseeing the process. Frankfurt District Court in Germany has tested an AI system named Frauke to deal with air passenger rights lawsuits. Frauke analyses earlier cases and rulings to create pre-configured draft judgments; judges assemble final verdicts from these texts following their ruling, significantly reducing the time spent drafting.

Taiwan piloted an AI-powered tool to assist courts by producing ruling notices for driving under the influence cases, or aiding and abetting in fraud cases. The AI system generates a complete draft ruling including the facts, legal reasoning, citations and final verdict. The judge reviews this draft and, upon approval, can issue it as the official judgment, with or without modifications.

It is evident from these examples that the key motivation to replace human judges in a certain category of cases is efficiency. As a result, a few other jurisdictions are also exploring the scope for integrating gen AI to adjudicate certain litigation without human judges.

The cost of using gen AI as judge

Courts are overburdened, and technology like gen AI promises consistency and efficiency. But it would mark a significant change to centuries-old practice. And it risks undermining what some legal scholars argue is a fundamental principle of justice: the right to be judged by a human being. Court adjudication is not only about reaching a decision.
It is about a holistic and fair process that includes the right to be heard - presenting a defence, weighing competing narratives, and exercising judgment in light of law and equity. Algorithmic tools, no matter how advanced, do not hear or "understand" even their own output, let alone human values or changing social contexts. Gen AI cannot recognise suffering, credibility, remorse or vulnerability like a human. That alone makes it unfit to sit in a judge's seat.

Categorising cases as simple or complex may look pragmatic, but it is both legally and morally dangerous. What counts as a "simple, routine or mechanical" case is itself a human decision. Legal disputes over compensation or benefits may appear straightforward on paper, yet carry significant consequences for the person bringing the case. Allocating such cases as appropriate for algorithmic adjudication risks creating a two-tier justice system - in which one group of citizens gets to present their case before a human judge, while others are handled by machines. Only the former, I would argue, are exercising their right to a fair hearing and trial before an independent and impartial tribunal, as protected under Article 6 of the European Convention on Human Rights.

Additionally, the efficiency argument may prove illusory. Algorithmic systems like gen AI require continuous human oversight, auditing and rectification. Hallucinations or mistakes, whether from flawed design or biased training data, can completely negate the claimed benefits. Public trust matters in all legal systems. If people lose trust in automated decisions, appeals will increase - adding to the existing backlog of cases.

Emerging technology such as gen AI may be suitable for managing court administration and reducing clerical burdens. But substituting human judges, even in supposedly low-stakes cases, undermines basic principles of justice. Efficiency should not come at the expense of the values the justice system exists to protect.
[2]
AI's double edge: Why courts won't let it pass judgment
Courts around the world are drawing a careful line around artificial intelligence (AI), welcoming it as a tool for efficiency but resisting any role for it in actual decision-making. The emerging consensus is not to ban AI outright, but to use it as an assistant while preserving the human core of justice in an era of smarter machines whose output still needs to be cross-checked and verified. Nowhere is this clearer than in India, where courts have taken some of the most explicit positions globally.

The reason: courts worldwide are struggling with backlogs, procedural delays, and limited resources. Properly deployed, AI could streamline workflows, reduce administrative burdens, and improve access to justice. But AI tools also hallucinate, and can make a mockery of the judicial system. Hence, courts are striking a balance.

Just this month, the Gujarat High Court issued a sweeping policy barring judges and court staff from using AI at any stage of judicial decision-making, including drafting orders or preparing judgments. The Gujarat HC's policy authorises the use of technology for administrative efficiency and legal research, provided that all outputs are strictly verified by human personnel. But it explicitly prohibits AI from being involved in judicial decision-making, sentencing, or the drafting of final judgments, to preserve the integrity of human conscience in the law. Safeguards have been mandated to prevent data breaches and the use of biased or fabricated information often generated by automated tools. Ultimately, the guidelines ensure that while digital innovation may assist court operations, personal accountability and constitutional values remain the cornerstone of justice. The logic is that even indirect reliance on AI could shape judicial reasoning in ways that are difficult to detect or challenge.
The policy, which focuses on keeping humans in the AI loop, mandates that a qualified human officer must always review and verify any AI-generated output before it is acted upon, filed, published, or communicated. The policy also dictates that officers must read the original document before acting upon an AI-generated summary of it, that machine-translated text must be verified by a person competent in the source language, and that AI-assisted transcriptions must be certified by a responsible officer before being used as a formal record. According to the policy, violations will attract appropriate action, including departmental or disciplinary proceedings under the applicable service rules.

This is not an isolated move. Last July, the Kerala High Court banned the use of AI tools in district court decision-making. And earlier this month, the Punjab and Haryana High Court restricted their use in both judgment writing and legal research. Even the Supreme Court of India, which has embraced AI for transcribing Constitution Bench proceedings, has kept AI tools firmly outside the realm of judicial reasoning. This, even as the Government of India has allocated a total of ₹7,210 crore for the e-Courts Phase III project, of which ₹53.57 crore is specifically earmarked for the integration of AI and blockchain technologies across High Courts in India.

For instance, the Supreme Court has partnered with the Indian Institute of Technology-Madras to test tools that are being developed to help identify and fix filing defects, and to extract and organise data and metadata from case documents. These are expected to be integrated into the Court's e-filing system and its case management platform, the Integrated Case Management & Information System (ICMIS). In parallel, an AI-based system called the Supreme Court Portal for Assistance in Court Efficiency (SUPACE) is under experimental development.
It is designed to help map the factual background of cases, enable more targeted searches for relevant precedents, and assist in identifying similar matters. The system is currently being tested by the Supreme Court and is not part of the judicial decision-making process.

Global cues

That said, the U.S. presents a more permissive, but increasingly cautious, approach. Judges there are already experimenting with AI for tasks like summarising briefs, drafting routine orders, and managing caseloads. Yet this openness has been tempered by a series of high-profile missteps. In one widely cited incident, lawyers submitted filings that included entirely fabricated case citations generated by an AI tool, an episode that has since become shorthand for the risks of "hallucination", where systems produce plausible but false information. Courts responded with sanctions, fines, and a wave of new guidelines requiring lawyers to verify AI-assisted work. Individual judges and jurisdictions have also introduced disclosure requirements, mandating that attorneys declare whether AI tools were used in preparing submissions.

Across the Atlantic, the approach in the U.K. and Europe has focused less on bans and more on embedding safeguards. Judicial bodies have warned explicitly that reliance on AI-generated content, especially unverified legal citations, could undermine the integrity of proceedings and expose lawyers to professional or even criminal consequences. At a broader level, frameworks emerging from institutions like the Council of Europe emphasise that any use of AI in justice systems must align with human rights, due process, and accountability. The emphasis here is not just on what AI can do, but on what it should be allowed to do within a constitutional order.

Common doctrine taking shape globally

Effectively, AI around the world is increasingly accepted for administrative and preparatory functions like transcription, document management, legal research, and analytics.
But when it comes to core judicial functions such as interpreting the law, weighing evidence, or determining guilt or liability, human judgment remains non-negotiable.

This caution is rooted in more than just institutional conservatism. AI systems, particularly large language models, are prone to confident errors. Hallucinated case law is the most visible manifestation, but the risks run deeper. Subtle biases in training data can skew outputs in ways that are hard to detect. Summaries may omit nuance or misrepresent testimony. Even seemingly benign assistance, like drafting a judgment, can introduce framing effects that shape how a judge thinks about a case. The opacity of many AI systems compounds the problem: if a model cannot explain how it arrived at a conclusion, it becomes nearly impossible to scrutinise or appeal that reasoning.

There is also a more structural concern. The legitimacy of courts rests not just on correct outcomes, but on the perception of fairness and accountability. Delegating any part of judicial reasoning to machines risks eroding that legitimacy, especially in societies where trust in institutions is already fragile. If a litigant believes that an algorithm influenced a verdict, even indirectly, the burden of proof shifts in ways that legal systems are not designed to handle.

The challenge, then, is not whether to use AI, but how to contain it. Indian courts are certainly showing the way.
Courts globally are drawing strict boundaries around artificial intelligence use, welcoming it for administrative tasks but firmly rejecting its role in judicial decision-making. India's Gujarat High Court recently banned AI from drafting judgments, while the UK's Geoffrey Vos sparked debate by suggesting AI might handle mechanical cases. The divide highlights tensions between efficiency and the fundamental right to human judgment.
Courts worldwide are establishing firm boundaries around artificial intelligence use in legal systems, accepting it as a tool for efficiency while categorically rejecting its involvement in judicial decision-making [1][2]. The emerging consensus prioritizes human accountability in justice over technological convenience, even as AI in courts promises to address massive case backlogs and resource constraints. This month, the Gujarat High Court issued sweeping guidelines barring judges and court staff from using AI at any stage of judicial decision-making, including drafting judgments or preparing orders [2]. The policy explicitly prohibits reliance on automated tools for sentencing or final verdicts, citing concerns about bias and AI-generated hallucinations that could undermine the integrity of human conscience in law.
Source: The Conversation
India has taken some of the most explicit positions globally on restricting artificial intelligence in legal proceedings. The Kerala High Court banned AI tools in district court decision-making last July, while the Punjab and Haryana High Court restricted their use in both judgment writing and legal research this month [2]. Even India's Supreme Court, which has embraced AI for transcribing Constitution Bench proceedings, keeps these tools firmly outside judicial reasoning. The court is testing SUPACE, an experimental system designed to map case backgrounds and identify precedents, but it remains excluded from the decision-making process [2]. This cautious approach persists despite the Government of India allocating ₹7,210 crore for the e-Courts Phase III project, with ₹53.57 crore earmarked specifically for integrating AI and blockchain technologies across High Courts.
Source: CXOToday
The UK's second most senior judge, Geoffrey Vos, has proposed a more permissive approach, suggesting AI might handle what he calls "broadly mechanical decisions" like pension calculations, benefits determinations, or personal injury damages [1]. In a November 2024 speech, Vos described a "spectrum" of legal decisions that AI might make or help make, arguing such use would save money and time. However, he called for serious debate about whether this would violate essential human rights and urged that technology be used responsibly in legal systems and processes. Several jurisdictions are already testing this approach. Estonia uses a semi-automated small-claims system for monetary claims up to €7,000, with human clerks overseeing the process [1]. Frankfurt District Court in Germany tested an AI system named Frauke for air passenger rights lawsuits, while Taiwan piloted AI tools to generate complete draft rulings for driving under the influence cases.

Legal scholars argue that categorizing cases as simple or complex to justify algorithmic adjudication is both legally and morally dangerous [1]. What counts as a "simple, routine or mechanical" case is itself a human decision, and disputes over compensation or benefits may appear straightforward yet carry significant consequences. Allocating such cases for algorithmic handling risks creating a two-tier justice system where some citizens present their case before a human judge while others are handled by machines. The right to a human judge represents what some scholars consider a fundamental principle of justice, encompassing not just decisions but a holistic process including the right to be heard, present a defense, and weigh competing narratives [1]. Generative AI cannot recognize suffering, credibility, remorse, or vulnerability like humans, making it fundamentally unfit for judicial roles.
The Gujarat High Court's policy mandates strict human oversight protocols, requiring qualified officers to review and verify any AI-generated output before it is acted upon, filed, or communicated [2]. Officers must read original documents before acting on AI summaries, machine-translated text must be verified by someone competent in the source language, and AI-assisted transcriptions must be certified before becoming formal records. Violations attract departmental or disciplinary proceedings under applicable service rules. These safeguards address real risks: in the U.S., lawyers submitted filings with entirely fabricated case citations generated by AI tools, resulting in legal sanctions, fines, and new disclosure requirements [2]. This incident has become shorthand for hallucination risks, where systems produce plausible but false information.

While courts reject AI in core judicial functions, they increasingly embrace it for case management and administrative tasks. The Gujarat policy authorizes technology for administrative efficiency and legal research, provided all outputs are verified by human personnel [2]. India's Supreme Court partnered with IIT-Madras to develop tools that identify filing defects and organize case metadata for integration into the e-filing system and Integrated Case Management & Information System [2]. UK guidelines suggest judges might use AI for preparatory work like drafting summaries of long documents, translating legal materials, identifying precedents, or enhancing document readability [1]. This distinction between administrative support and decision-making authority reflects courts' determination to preserve transparency, due process, and accountability while leveraging technology to address the procedural delays and limited resources that plague overburdened judicial systems worldwide.

Summarized by Navi