3 Sources
[1]
India's top court angry after junior judge cites fake AI-generated orders
India's Supreme Court has threatened legal consequences after a judge was found to have adjudicated on a property dispute using fake judgements generated by artificial intelligence. The top court, which was responding to an appeal by the defendants, will now examine the ruling given by the lower court in the southern state of Andhra Pradesh. The Supreme Court called the case a matter of "institutional concern" and said fake AI-generated judgements had "a direct bearing on integrity of adjudicatory process". The incident has made headlines, becoming the latest in a series of instances where AI has disrupted court proceedings in India and elsewhere in the world.

The problems in the case in Andhra Pradesh arose in August last year, when a junior civil judge in the trial court in Vijaywada city passed an order in a case about a disputed property. The court had previously assigned an official to survey the property and file a report, which the defendants in the case objected to. The judge dismissed their objection, citing four past legal judgements - all of which were later found to be AI-generated. AI programmes have vastly simplified tasks in the workplace, but generative AI systems are known to "hallucinate" and assert falsehoods as fact, sometimes even inventing sources for the inaccurate information.

The defendants challenged the order in the state's high court, pointing out that the cited orders were fake. The high court acknowledged this, but accepted that the junior civil judge had made the error in "good faith" and went on to agree with the trial court's decision anyway. In its order, the high court said that "the citations may be non-existent, but if the learned trial court has considered the correct principles of law and its application to the facts of the case is also correct, mere mentioning of incorrect or non-existent rulings/citations in the order cannot be a ground to set aside the order".
The high court had also sought a report from the junior judge who had used the AI-generated rulings. She told the court that this was her first time using an AI tool and that she had believed the citations to be "genuine". She said she had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote. The high court also advocated the "exercise of actual intelligence over artificial intelligence".

Following this, the defendants appealed again, taking the matter to the Supreme Court, which was less forgiving about the impact of AI. Coming down sternly on the fake judgements, the top court last Friday stayed the lower court's order on the property dispute. It said the use of AI while making judgements was not simply "an error in decision making" but an act of "misconduct". "This case assumes considerable institutional concern, not because of the decision that was taken on the merits of the case, but about the process of adjudication and determination," the top court said. The court said it would examine the case in more detail and issued notices to the country's Attorney General and Solicitor General, as well as the Bar Council of India.

In another case last month, the Supreme Court raised concerns over the trend of lawyers using AI tools to draft petitions. "It is absolutely uncalled for," legal news website LiveLaw quoted the court as saying. India is not alone in reckoning with the effects of AI in courts. In October, two federal judges in the US were called out for the use of AI tools which led to errors in their rulings. In June 2025, the High Court of England and Wales warned lawyers not to use AI-generated case material after a series of cases cited fictitious or partially made-up rulings. India's legal institutions are grappling alongside others around the world with how to regulate and monitor the use of AI in the courtroom.
Last year, the Supreme Court published a white paper on AI in India's judiciary, in which it listed best practices as well as guidelines for AI use by judicial institutions, lawyers and clerks. The court stressed the need for human oversight and the importance of keeping institutional safeguards "firmly in place".
[2]
SC takes cognisance of trial court relying on AI-generated 'fake' verdicts
New Delhi: Taking cognisance of a trial court relying on alleged non-existing verdicts that were generated with the help of artificial intelligence (AI), the Supreme Court has said a decision based on such fake judgments would not be an error in decision making but would amount to misconduct. A bench of Justices P S Narasimha and Alok Aradhe has said it will examine the matter in detail and issued a notice to Attorney General R Venkataramani, Solicitor General Tushar Mehta and the Bar Council of India. The court has also appointed senior advocate Shyam Divan to assist it in the matter. "We take cognisance of the trial court deploying AI-generated non-existing, fake or synthetic alleged judgments and seek to examine its consequences and accountability as it has a direct bearing on the integrity of the adjudicatory process," the bench said. "At the outset, we must declare that a decision based on such non-existent and fake alleged judgments is not an error in the decision making. It would be a misconduct and legal consequence shall follow. It is compelling that we examine this issue in more detail," the bench said in its February 27 order. The issue cropped up before the apex court while it was hearing a plea challenging a January order of the Andhra Pradesh High Court that came on a suit filed for an injunction. The top court said the case assumes considerable institutional concern, not because of the decision that was taken on the merits of the case, but regarding the process of adjudication and determination. "Issue notice to the attorney general, solicitor general and the Bar Council of India," it said. The court noted that pending the suit's disposal, the trial court had appointed an advocate-commissioner to note the physical features of the disputed property. The bench pointed out that the petitioners had challenged the advocate-commissioner's report by raising certain objections. 
It noted that the trial court, in its order passed in August last year, dismissed the objections and in the process, relied on certain judgments. The petitioners then challenged the trial court's order, contending that the verdicts referred to and relied on were non-existent and fake. The top court noted that the high court had considered the objection and realised that the judgments were AI-generated. It said after recording a word of caution, the high court had proceeded to decide the case on merits and dismissed the civil revision petition, affirming the decision of the trial court. The petitioners then moved the apex court, challenging the high court's order. The bench agreed to hear the plea and issued a notice on it. "Pending disposal of the special leave petition, we direct that the trial court shall not proceed on the basis of the advocate-commissioner's report," it said and posted the matter for hearing on March 10. Hearing a separate matter on February 17, a top court bench headed by Chief Justice Surya Kant expressed serious concern over a growing trend of lawyers filing petitions drafted with AI tools that contain non-existent judgments such as "Mercy vs Mankind". It made the observations while hearing a public interest litigation (PIL) matter seeking guidelines on political speeches.
[3]
SC Says Citing AI-Generated Fake Case Laws Is Misconduct
The Supreme Court (SC) has declared that judges citing AI-generated fake case laws commit "misconduct" that warrants "legal consequences", not merely an error in reasoning. This marks the first time India's apex court has escalated AI hallucinations from a technical problem to a conduct issue with disciplinary implications. A Bench of Justice PS Narasimha and Justice Alok Aradhe made the declaration on February 27 while taking suo motu cognizance of an Andhra Pradesh trial court order that relied on four fabricated judgments. "At the outset, we must declare that a decision based on such non-existent and fake alleged judgments is not an error in the decision-making process. It would be a misconduct and legal consequence shall follow," the order stated. The Court has sought responses from the Attorney General, Solicitor General, and the Bar Council of India, while appointing Senior Advocate Shyam Divan as amicus curiae. The matter will come up for hearing on March 10.

The ruling comes amid a troubling pattern of Indian judges and lawyers citing fabricated precedents, a phenomenon in which AI tools like ChatGPT generate plausible-sounding but entirely fictitious case law. The controversy centers on an August 2025 order in a property dispute, in which the trial court dismissed the defendants' objections to a court-appointed commissioner's report by citing four SC judgments that do not exist. When the defendants approached the Andhra Pradesh High Court (HC), it acknowledged the judgments as "Artificial Intelligence (AI)-generated", but instead of addressing the systemic breach, it merely recorded "a word of caution" and affirmed the trial court's decision on merits. The SC found this approach inadequate. The Bench noted that "this case assumes considerable institutional concern, not because of the decision that was taken on the merits of the case, but about the process of adjudication and determination".
Furthermore, the Court directed the trial court not to proceed on the basis of the commissioner's report pending its examination of accountability mechanisms. By labelling the citation of fake case laws as "misconduct" rather than an error, the SC has significantly escalated the consequences. However, Indian law prescribes substantially different penalties for judges and lawyers.

For judges, impeachment remains nearly impossible. Misconduct can trigger removal under Article 124(4) of the Constitution, but Parliament can remove judges only through impeachment, which requires a majority of both Houses - a bar so high that removal is effectively out of reach. Alternatively, the Chief Justice can advise judges to resign voluntarily, retire, or stop assigning them judicial work. Notably, no judge has been successfully impeached in India's 75-year constitutional history.

For lawyers, the Bar Council can suspend or disbar. The Bar Council of India's Disciplinary Committee can impose more immediate consequences: reprimand, suspension from practice for a specified period, or removal from the roll. A suspended advocate loses the right to practice in any court across India. Additionally, the Bar Council's involvement signals that lawyers who submit AI-generated citations without verification could face disciplinary action.

The Andhra Pradesh case does not stand alone; rather, it represents the latest in a series of AI-related judicial failures. In December 2024, the Bengaluru bench of the Income Tax Appellate Tribunal issued an order in the Buckeye Trust case citing three SC judgments and one Madras HC ruling, none of which existed. The tribunal recalled the order within a week after sources indicated the tax department's representative had used ChatGPT without verification. The order involved a Rs 669 crore trust taxation dispute.
In October 2025, the Bombay HC quashed a Rs 27.91 crore income tax assessment after discovering that the National Faceless Assessment Centre had relied on three non-existent judicial decisions. The Court observed: "In this era of Artificial Intelligence, one tends to place much reliance on the results thrown open by the system. However, when one is exercising quasi-judicial functions, such results are not to be blindly relied upon." Moreover, in January 2025, a trial court judge in Karnataka cited incorrect precedents generated by ChatGPT, with both parties stating they had never cited those cases. Meanwhile, a Punjab and Haryana HC judge publicly admitted to using ChatGPT to research bail jurisprudence.

The SC intervened three months after its Centre for Research and Planning released a White Paper on Artificial Intelligence and the Judiciary in November 2025. However, the document outlined principles without establishing binding accountability frameworks. The White Paper explicitly warned about AI "hallucinations", where generative systems fabricate facts or court citations that appear legally credible but are false. It documented instances in which trial court judges relied on AI-generated material containing non-existent precedents, and noted that the Income Tax Appellate Tribunal recalled an order after discovering reliance on fictitious case law.

Nevertheless, the judiciary has rapidly adopted AI tools. As of early 2026, the SC has deployed SUPACE (Supreme Court Portal for Assistance in Court Efficiency) for research assistance, still in an experimental stage pending GPU infrastructure; SUVAS (Supreme Court Vidhik Anuvaad Software), which has translated over 36,271 judgments into regional languages; and TERES (Transcription of Electronic Record and Speech), which provides real-time transcription during Constitution Bench hearings.
Moreover, the SC's White Paper in November 2025 outlined how AI could address case backlogs and procedural delays, positioning these tools as assistive technology for court administration. However, as of July 2025, over 5.29 crore cases remain pending in Indian courts, including nearly 87,000 in the SC alone. The paradox emerges clearly: courts are rapidly adopting AI for efficiency while grappling with verification failures that undermine judicial integrity. The March 10 hearing will examine what disciplinary consequences should follow for citing fabricated case laws, whether courts should impose mandatory disclosure requirements when judges or lawyers use AI tools, and what verification protocols courts must adopt. As Justice Vikram Nath emphasized in September 2025: "A judge is not an algorithm. A judge is a human being guided by constitutional morality, empathy and lived experience. A machine cannot understand the anguish of a victim, the remorse of an accused, or the complexities of social context". What happens next? The Court will hear arguments from the Attorney General, Solicitor General, Bar Council of India, and amicus curiae Shyam Divan on March 10. The outcome could establish India's first comprehensive framework for AI accountability in the justice system.
India's Supreme Court has escalated AI hallucinations from technical errors to misconduct after a judge cited four fake AI-generated judgments in a property dispute. The ruling marks the first time the apex court has declared that using fabricated case laws warrants disciplinary action and legal consequences, not merely correction.
India's Supreme Court has declared that judges citing AI-generated fake case laws commit misconduct that warrants legal consequences, marking a significant escalation in how the country's legal institutions address AI use in the judiciary [3]. A bench of Justices P S Narasimha and Alok Aradhe stated on February 27 that "a decision based on such non-existent and fake alleged judgments is not an error in the decision making. It would be a misconduct and legal consequence shall follow" [2]. The court has issued notices to the Attorney General, Solicitor General, and the Bar Council of India, while appointing senior advocate Shyam Divan to assist in examining the matter [2].
Source: MediaNama
The controversy centers on an August 2025 order by a junior civil judge in Vijaywada city, Andhra Pradesh, who dismissed objections in a property dispute by citing four past legal judgments, all later found to be AI-generated [1]. The trial court had appointed an advocate-commissioner to survey the disputed property, and when the defendants objected to the report, the judge relied on the fabricated precedents to reject their concerns [1]. When the defendants challenged the order in the state's high court, pointing out the fake citations, the high court acknowledged the error but accepted it was made in "good faith" and upheld the decision anyway [1]. The high court reasoned that "the citations may be non-existent, but if the learned trial court has considered the correct principles of law and its application to the facts of the case is also correct, mere mentioning of incorrect or non-existent rulings/citations in the order cannot be a ground to set aside the order" [1].

The Supreme Court found this approach inadequate and stayed the lower court's order, declaring the case a matter of "institutional concern" because it directly affects the integrity of the adjudicatory process [1]. The court noted that "this case assumes considerable institutional concern, not because of the decision that was taken on the merits of the case, but about the process of adjudication and determination" [3]. By labeling the citation of AI-generated fake case laws as misconduct rather than an error, the apex court has significantly escalated the consequences and signaled that accountability for deploying AI tools in legal settings will be scrutinized [3]. The matter is scheduled for a detailed hearing on March 10, with the court directing that the trial court shall not proceed on the basis of the advocate-commissioner's report pending disposal [2].
Source: ET
This Andhra Pradesh case represents the latest in a troubling series of incidents where AI hallucinations have disrupted court proceedings across India. In December 2024, the Bengaluru bench of the Income Tax Appellate Tribunal issued an order in the Buckeye Trust case, a Rs 669 crore trust taxation dispute, citing three Supreme Court judgments and one Madras High Court ruling, none of which existed [3]. The tribunal recalled the order within a week after sources indicated the tax department's representative had used ChatGPT without verification [3]. In October 2025, the Bombay High Court quashed a Rs 27.91 crore income tax assessment after discovering that the National Faceless Assessment Centre had relied on three non-existent judicial decisions [3]. The junior judge in the current case told the court this was her first time using an AI tool and she believed the citations to be "genuine," stating that "the mistake occurred solely due to the reliance on an automatic source" [1].
The Supreme Court's declaration that fake AI-generated verdicts constitute misconduct carries substantially different penalties depending on whether judges or lawyers are involved. For judges, misconduct can trigger removal under Article 124(4) of the Constitution, but Parliament can remove judges only through impeachment requiring a majority of both Houses, a bar so high that no judge has been successfully impeached in India's 75-year constitutional history [3]. Alternatively, the Chief Justice can advise judges to resign voluntarily, retire, or stop assigning them judicial work [3]. For lawyers, however, the Bar Council of India's Disciplinary Committee can impose more immediate disciplinary actions including reprimand, suspension from practice for a specified period, or removal from the roll, with a suspended advocate losing the right to practice in any court across India [3].

The ruling comes three months after the Supreme Court's Centre for Research and Planning released a White Paper on Artificial Intelligence and the Judiciary in November 2025, which outlined principles for AI use in the judiciary but did not establish binding accountability frameworks [3]. The document explicitly warned about AI hallucinations, where generative systems fabricate facts or court citations that appear legally credible but are false [3]. The White Paper stressed the need for human oversight and the importance of keeping institutional safeguards "firmly in place" [1]. In another case last month, the Supreme Court raised concerns over the trend of lawyers using AI tools to draft petitions, with legal news website LiveLaw quoting the court as saying "it is absolutely uncalled for" [1]. On February 17, a bench headed by Chief Justice Surya Kant expressed serious concern over lawyers filing petitions drafted with AI tools containing non-existent judgments such as "Mercy vs Mankind" [2]. India's legal institutions are grappling alongside others worldwide with regulating AI use: in October 2025, two federal judges in the US were called out for AI tool errors in their rulings, while in June 2025, the High Court of England and Wales warned lawyers not to use AI-generated case material after a series of cases cited fictitious rulings [1].

Summarized by Navi