19 Sources
[1]
Deloitte will refund Australian government for AI hallucination-filled report
The Australian Financial Review reports that Deloitte Australia will offer the Australian government a partial refund for a report that was littered with AI-hallucinated quotes and references to nonexistent research. Deloitte's "Targeted Compliance Framework Assurance Review" was finalized in July and published by Australia's Department of Employment and Workplace Relations (DEWR) in August (Internet Archive version of the original). The report, which cost Australian taxpayers nearly $440,000 AUD (about $290,000 USD), focuses on the technical framework the government uses to automate penalties under the country's welfare system.

Shortly after the report was published, though, Sydney University Deputy Director of Health Law Chris Rudge noticed citations to multiple papers and publications that did not exist. That included multiple references to nonexistent reports by Lisa Burton Crawford, a real professor at the University of Sydney law school. "It is concerning to see research attributed to me in this way," Crawford told the AFR in August. "I would like to see an explanation from Deloitte as to how the citations were generated."

"A small number of corrections"

Deloitte and the DEWR buried that explanation in an updated version of the original report published Friday "to address a small number of corrections to references and footnotes," according to the DEWR website. On page 58 of that 273-page updated report, Deloitte added a reference to "a generative AI large language model (Azure OpenAI GPT-4o) based tool chain" that was used as part of the technical workstream to help "[assess] whether system code state can be mapped to business requirements and compliance needs." Of the 141 sources cited in an extensive "Reference List" in the original report, only 127 appear in the updated report.
In addition to the now-deleted references to fake publications from Crawford and other academics, the updated report also removed a fabricated quote attributed to an actual ruling from federal justice Jennifer Davies (spelled as "Davis" in the original report). Deloitte Australia said it will repay the final installment of its contract with the government, though it's unclear which portion of the total contract that represents. A spokesperson for DEWR told the AFR that "the substance of the independent review is retained, and there are no changes to the recommendations." But Sydney University's Rudge told AFR that "you cannot trust the recommendations when the very foundation of the report is built on a flawed, originally undisclosed, and non-expert methodology... Deloitte has admitted to using generative AI for a core analytical task; but it failed to disclose this in the first place."
[2]
Deloitte goes all in on AI -- despite having to issue a hefty refund for use of AI | TechCrunch
Professional services and consulting firm Deloitte announced a landmark AI enterprise deal with Anthropic the same day it was revealed the company would issue a refund for a government-contracted report that contained inaccurate AI-produced slop. The upshot: Deloitte's deal with Anthropic is a referendum on its commitment to AI, even as it grapples with the technology. And Deloitte is not alone in this challenge. The timing of this announcement is interesting -- comical even. On the same day Deloitte touted its increased use of AI, Australia's Department of Employment and Workplace Relations said the consulting company would have to issue a refund for a report it did for the department that included AI hallucinations, the Financial Times reported. The department had commissioned a A$439,000 "independent assurance review" from Deloitte, which was published earlier this year. The Australian Financial Review reported in August that the review had a number of errors, including multiple citations to non-existent academic reports. A corrected version of the review was uploaded to the department's website last week. Deloitte will repay the final installment of its government contract, the FT reported. TechCrunch reached out to Deloitte for comment and will update the article if the company responds. On Monday, Deloitte announced plans to roll out Anthropic's chatbot Claude to its nearly 500,000 global employees. Deloitte and Anthropic, which formed a partnership last year, plan to create compliance products and features for regulated industries including financial services, healthcare and public services, according to an Anthropic blog post. Deloitte also plans to create different AI agent "personas" to represent the different departments within the company, including accountants and software developers, according to reporting from CNBC. 
"Deloitte is making this significant investment in Anthropic's AI platform because our approach to responsible AI is very aligned, and together we can reshape how enterprises operate over the next decade. Claude continues to be a leading choice for many clients and our own AI transformation," Ranjit Bawa, Deloitte's global technology and ecosystems and alliances leader, wrote in the blog post. The financial terms of the deal -- which Anthropic referred to as an alliance -- were not disclosed. The deal is not only Anthropic's largest enterprise deployment yet, it also illustrates how AI is embedding itself in every aspect of modern life from tools used at work to casual queries made at home. Deloitte is not the only company, or individual, getting caught using inaccurate AI-produced information in recent months either. In May, the Chicago Sun-Times newspaper had to admit that it ran an AI-generated list of books for its annual summer reading list after readers discovered some of the book titles were hallucinated even if the authors were real. An internal document viewed by Business Insider showed Amazon's AI productivity tool, Q Business, struggled with accuracy in its first year. Anthropic itself has also been knocked for using AI-hallucinated information from its own chatbot Claude. The AI research lab's lawyer apologized after the company used an AI-generated citation in a legal dispute with music publishers earlier this year.
[3]
Deloitte Refunds Portion of $440K Report Over AI Hallucinations
Deloitte recently signed a deal with Anthropic to give all of its employees access to the Claude chatbot and dramatically expand the use of AI across its business, but the consulting firm might want to keep a close eye on what Claude is churning out. The Australian government recently ordered Deloitte to refund a portion of the $440,000 it was paid for a compliance report after discovering several errors, The Guardian reports, most of which were phony citations. A Deloitte spokesperson said the company "confirmed some footnotes and references were incorrect," and Australian officials re-uploaded a corrected version. Ironically, the report's methodology says its risk assessment process involved using "Azure OpenAI GPT-4o." AI hallucinations have cropped up in multiple professional publications over the past few years, from courtrooms to classrooms. It does not appear that Claude was involved in the Australia-Deloitte snafu, but the Anthropic deal is the chatbot maker's largest enterprise deployment. It's already working with Novo Nordisk, Cox Automotive, and Salesforce, according to a blog post. Deloitte has over 470,000 employees across 150 countries who will, theoretically, now be turning to AI on projects with its more than 30,000 customers. The company plans to create customized versions of Claude, called "personas," tailored for different job functions. An accountant may have a different version than, say, a software developer. Deloitte is also launching a Claude Center of Excellence to help teams adopt the chatbot and answer questions. In theory, AI chatbots could help the consulting industry boost productivity, access more information, and more rapidly assemble reports. 
It's why rival firms like McKinsey are equipping their consultants with AI agents that can whip up PowerPoints, take notes, and create documents for clients, The Wall Street Journal reports. But in practice, it's a mixed bag. AI is already starting to automate entry-level jobs, Harvard Business Review reports. Senior consultants are also struggling to advise their clients on how to navigate thorny questions around AI adoption, the Journal reports. Providing clients with inaccurate, AI-generated reports probably doesn't help.
[4]
Deloitte refunds Australian government over AI in report
Big Four consultancy billed Canberra top dollar, only for investigators to find bits written by a chatbot

Deloitte has agreed to refund part of an Australian government contract after admitting it used generative AI to produce a report riddled with fake citations, phantom footnotes, and even a made-up quote from a Federal Court judgment. The consulting giant confirmed it would repay the final installment of its AU$440,000 ($291,245) agreement with Australia's Department of Employment and Workplace Relations (DEWR) after the department re-uploaded a corrected version of the report late last week - conveniently timed for the weekend. The updated version strips out more than a dozen bogus references and footnotes, rewrites text, and fixes assorted typos, although officials insist the "substance" of the report remains intact. The work, commissioned last December, involved the Targeted Compliance Framework - the government's IT-driven system for penalizing welfare recipients who miss obligations such as job search appointments. When the report was first published in July, University of Sydney academic Dr Christopher Rudge spotted multiple fabrications, prompting Deloitte to investigate. Rudge initially suggested the errors might be the handiwork of a chatbot prone to "hallucinations," a suspicion Deloitte declined to confirm at the time. Now the new version [PDF] contains a disclosure in its methodology section: Deloitte used "a generative AI large language model (Azure OpenAI GPT-4o) based tool chain licensed by DEWR and hosted on DEWR's Azure tenancy" to fill "traceability and documentation gaps." In plain English, the bot helped with analysis and cross-referencing. Rudge called it a "confession," arguing the consultancy had admitted to using AI for "a core analytical task" but failed to disclose this up front. 
"You cannot trust the recommendations when the very foundation of the report is built on a flawed, originally undisclosed, and non-expert methodology," he told the Australian Financial Review (AFR). Deloitte, which has long touted its AI prowess and sells training on "responsible AI" to clients, is now dealing with the embarrassment of having its own report exposed as AI-tainted. Globally, the firm generates more than $70 billion annually, and an increasing portion of that comes from advising on AI ethics, policy, and deployment - not from accidentally quoting nonexistent academics. Among the deleted material were citations to imaginary reports by University of Sydney law professor Lisa Burton Crawford and Lund University's Björn Regnell, as well as a fictional extract from the robo-debt case Amato v Commonwealth. Even "Justice Davis" - apparently meant to be Justice Jennifer Davies - made a cameo in the original version, along with a fabricated quote from nonexistent paragraphs 25 and 26 of her judgment. Despite all this, DEWR insists the "substance of the independent review is retained, and there are no changes to the recommendations." A source told the AFR that Deloitte's internal review blamed the gaffes on human error rather than AI misuse - though, given the list of hallucinated references, that defense may test the limits of human credulity. For a Big Four firm that charges a premium for its rigor, the incident may have a few anxious partners quietly checking whether their PowerPoint decks contain phantom professors. The takeaway for public sector clients is clear: if you're paying a Big Four firm hundreds of thousands for expert analysis, you might want to make sure the expert isn't a chatbot with a taste for creative writing. Deloitte did not respond to requests for comment. ®
[5]
Deloitte issues refund for error-ridden government report that used AI
Deloitte will partially refund payment for an Australian government report that contained multiple errors after admitting it was partly produced by artificial intelligence. The Big Four accountancy and consultancy firm will repay the final instalment of its government contract after conceding that some footnotes and references it contained were incorrect, Australia's Department of Employment and Workplace Relations said on Monday. The department had commissioned a A$439,000 "independent assurance review" from Deloitte in December last year to help assess problems with a welfare system for automatically penalising jobseekers. The Deloitte review was first published earlier this year, but a corrected version was uploaded on Friday to the departmental website. In late August the Australian Financial Review reported that the document contained multiple errors, including references and citations to non-existent reports by academics at the universities of Sydney and Lund in Sweden. The substance of the review and its recommendations had not changed, the Australian government added. The contract will be made public once the transaction is completed, it said. The embarrassing episode underscores the dangers posed to consultancies by using AI technology, particularly the danger of "hallucinations". The Big Four consulting firms, as well as strategy houses such as McKinsey, have poured billions of pounds into AI research and development in a bid to keep nimble smaller competitors at bay. They hope to use the technology to accelerate the speed at which they can provide advice and audits to clients. The UK accountancy regulator warned in June that the Big Four firms were failing to keep track of how automated tools and AI affected the quality of their audits, even as firms escalate their use of the technology to perform risk assessments and obtain evidence. In the updated version of the report, Deloitte added references to the use of generative AI in its appendix. 
It states that a part of the report "included the use of a generative artificial intelligence (AI) large language model (Azure OpenAI GPT-4o) based tool chain" licensed by the government department. While Deloitte did not state that AI caused the mistakes in its original report, it admitted that the updated version corrected errors with citations, references, and one summary of legal proceedings. "The updates made in no way impact or affect the substantive content, findings and recommendations in the report," Deloitte stated in the amended version. Deloitte Australia said: "The matter has been resolved directly with the client."
[6]
Deloitte to partially refund Australian government for report with apparent AI-generated errors
MELBOURNE, Australia (AP) -- Deloitte Australia will partially refund the 440,000 Australian dollars ($290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers. The financial services firm's report to the Department of Employment and Workplace Relations was originally published on the department's website in July. A revised version was published Friday after Chris Rudge, a Sydney University researcher of health and welfare law, said he alerted the media that the report was "full of fabricated references." Deloitte had reviewed the 237-page report and "confirmed some footnotes and references were incorrect," the department said in a statement Tuesday. "Deloitte had agreed to repay the final instalment under its contract," the department said. The amount will be made public after the refund is paid. Asked to comment on the report's inaccuracies, Deloitte told The Associated Press in a statement the "matter has been resolved directly with the client." Deloitte did not respond when asked if the errors were generated by AI. A tendency for generative AI systems to fabricate information is known as hallucination. The report reviewed departmental IT systems' use of automated penalties in Australia's welfare system. The department said the "substance" of the report had been maintained and there were no changes to its recommendations. The revised version included a disclosure that a generative AI language system, Azure OpenAI, was used in writing the report. Quotes attributed to a federal court judge were removed, as well as references to nonexistent reports attributed to law and software engineering experts. Rudge said he found up to 20 errors in the first version of the report. 
The first error that jumped out at him wrongly stated that Lisa Burton Crawford, a Sydney University professor of public and constitutional law, had written a nonexistent book with a title suggesting it was outside her field of expertise. "I instantaneously knew it was either hallucinated by AI or the world's best kept secret because I'd never heard of the book and it sounded preposterous," Rudge said. Work by his academic colleagues had been used as "tokens of legitimacy," cited by the report's authors but not read, Rudge said, adding that he considered misquoting a judge a more serious error in a report that was effectively an audit of the department's legal compliance. "They've totally misquoted a court case then made up a quotation from a judge and I thought, well hang on: that's actually a bit bigger than academics' egos. That's about misstating the law to the Australian government in a report that they rely on. So I thought it was important to stand up for diligence," Rudge said. Senator Barbara Pocock, the Australian Greens party's spokesperson on the public sector, said Deloitte should refund the entire AU$440,000 ($290,000). Deloitte "misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent," Pocock told Australian Broadcasting Corp. "I mean, the kinds of things that a first-year university student would be in deep trouble for."
[7]
Deloitte admits AI hallucinated quotes in government report, offers partial refund
Facepalm: Deloitte Australia has agreed to refund part of its fee to the federal government after admitting that a $440,000 welfare compliance review contained fabricated citations and quotes generated by artificial intelligence. The admission followed a discovery by a University of Sydney academic who found multiple references to research that did not exist. The incident underscores a growing debate over the use of AI systems in professional consultancy work, particularly when those tools generate references or conclusions without human verification. While Deloitte maintains that the policy recommendations remain valid, the incident has drawn attention to the transparency requirements associated with AI-assisted analysis in government contracts. The Targeted Compliance Framework Assurance Review, completed in July and commissioned by the Department of Employment and Workplace Relations (DEWR), examined the government's automated penalty system for welfare violations. Deloitte's work was intended to assess whether system code matched business requirements and policy compliance standards. It was published in August, but within weeks, academics flagged serious problems. Chris Rudge, deputy director of health law at the University of Sydney, found citations to numerous nonexistent articles, including reports attributed to Lisa Burton Crawford, a law professor at the same institution who confirmed she had never authored the works in question. "It is concerning to see research attributed to me in this way," Crawford told the Australian Financial Review. "I would like to see an explanation from Deloitte as to how the citations were generated." 
The DEWR released an updated version of the 273-page review on Friday, noting it had made "a small number of corrections to references and footnotes." Buried on page 58 was a new disclosure that Deloitte's technical team had used "a generative AI large language model (Azure OpenAI GPT-4o) based tool chain" during its analysis. The model was applied to determine whether the welfare system's software code could be mapped to compliance rules - an application that, in this case, produced inaccurate references. The revised report also removed a fabricated judicial quote originally attributed to Federal Justice Jennifer Davies, whose name had been misspelled as "Davis" in Deloitte's initial publication. Of the 141 works listed in the original reference section, only 127 appear in the updated document. References to fictitious publications by Crawford and other academics were deleted without further explanation. Deloitte told the AFR that it would return the final instalment of its government payment, though neither party clarified what percentage of the total fee this covered. A DEWR spokesperson said, "The substance of the independent review is retained, and there are no changes to the recommendations." Rudge, however, questioned the integrity of those recommendations. "You cannot trust the recommendations when the very foundation of the report is built on a flawed, originally undisclosed, and non-expert methodology," he said. "Deloitte has admitted to using generative AI for a core analytical task; but it failed to disclose this in the first place."
[8]
Deloitte forced to refund Aussie government after admitting it used AI to produce error-strewn report
It highlights the need for greater transparency surrounding AI use

Deloitte has admitted to using generative AI to produce a report for the Australian government without implementing sufficient safeguards, landing the company in a whole load of trouble. The report included fake citations, false footnotes and a made-up court quote, among other blunders, leaving Deloitte with a not-so-insignificant voluntary penalty. Deloitte has agreed to repay the final installment of its AU$440,000 agreement with the government's Department of Employment and Workplace Relations (DEWR). DEWR was forced to re-upload the report, removing over a dozen false references, fixing typos and rewriting some sections. Although the agency asserts the core messages remain unchanged, the previously "final" report clearly needed some major revisions. Fake citations from academics Lisa Burton Crawford and Björn Regnell were removed, as well as an extract from the Amato vs Commonwealth case attributed to a person who doesn't actually exist. The updated document now discloses the use of GPT-4o, and while using GenAI by itself isn't a problem, this case does highlight the need for transparency. This is from a consulting firm that advocates responsible AI. "The updates made in no way impact or affect the substantive content, findings and recommendations in the report," the updated report now reads. The University of Sydney's Dr Christopher Rudge is credited with identifying the use of GenAI. "You cannot trust the recommendations when the very foundation of the report is built on a flawed, originally undisclosed, and non-expert methodology," he added (via the Australian Financial Review). Deloitte says that "the matter has been resolved directly with the client," according to the Financial Times. TechRadar Pro has reached out for further context.
[9]
Deloitte to pay money back to Albanese government after using AI in $440,000 report
Partial refund to be issued after several errors were found in a report into a department's compliance framework

Deloitte will provide a partial refund to the federal government over a $440,000 report that contained several errors, after admitting it used generative artificial intelligence to help produce it. The Department of Employment and Workplace Relations (DEWR) confirmed Deloitte would repay the final instalment under its contract, which will be made public after the transaction is finalised. It comes as one Labor senator accused the consultancy firm of having a "human intelligence problem". Deloitte was commissioned by the department in December 2024 to review the targeted compliance framework and its IT system, used to automate penalties in the welfare system if mutual obligations weren't met by jobseekers. The subsequent report found widespread issues, including a lack of "traceability" between the rules of the framework and the legislation behind it, as well as "system defects". It said an IT system was "driven by punitive assumptions of participant non-compliance". It was first published on 4 July. It was re-uploaded to the DEWR website on Friday, after the Australian Financial Review in August reported that multiple errors had been found, including nonexistent references and citations. University of Sydney academic Dr Christopher Rudge, who first highlighted the errors, said the report contained "hallucinations", where AI models may fill in gaps, misinterpret data, or try to guess answers. "Instead of just substituting one hallucinated fake reference for a new 'real' reference, they've substituted the fake hallucinated references and in the new version, there's like five, six or seven or eight in their place," he said. "So what that suggests is that the original claim made in the body of the report wasn't based on any one particular evidentiary source." 
The updated review noted a "small number of corrections to references and footnotes", but the department has said there have been no changes to the review's recommendations. "Deloitte conducted the independent assurance review and has confirmed some footnotes and references were incorrect," a spokesperson for the department said. "The substance of the independent review is retained, and there are no changes to the recommendations." In the updated version of the report, Deloitte added a reference to the use of generative AI in its appendix. It states that a part of the report "included the use of a generative artificial intelligence (AI) large language model (Azure OpenAI GPT-4o) based tool chain licensed by DEWR and hosted on DEWR's Azure tenancy." Deloitte did not state that artificial intelligence was the reason behind the errors in its original report. It also stood by the original findings of the review. "The updates made in no way impact or affect the substantive content, findings and recommendations in the report," it stated in the amended version. A spokesperson for Deloitte said "the matter has been resolved directly with the client". Rudge said that, despite his criticism, he hesitates to say the whole report should be "regarded as illegitimate", because the conclusions concur with other widespread evidence. Labor senator Deborah O'Neill, who was on a senate inquiry into the integrity of consulting firms, said it looked like "AI is being left to do the heavy lifting". "Deloitte has a human intelligence problem. This would be laughable if it wasn't so lamentable. A partial refund looks like a partial apology for substandard work," she said. "Anyone looking to contract these firms should be asking exactly who is doing the work they are paying for, and having that expertise and no AI use verified. "Perhaps instead of a big consulting firm, procurers would be better off signing up for a ChatGPT subscription." 
The AFR found several incorrect references in the original report, including nonexistent reports by professors at the University of Sydney and Lund University in Sweden. The paper also reported a made-up reference to a court decision in a robodebt case, Deanna Amato v Commonwealth. Deloitte wrote in its final report that the update "amend[ed] the summary of the Amato proceeding which contained errors".
[10]
Deloitte was caught using AI in $290,000 report to help the Australian government crack down on welfare after a researcher flagged hallucinations | Fortune
Deloitte's member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment. The report was originally published on the Australian government's Department of Employment and Workplace Relations website in July. A revised version was quietly published on Friday after Sydney University researcher of health and welfare law Chris Rudge said he alerted media outlets that the report was "full of fabricated references." Deloitte reviewed the 237-page report and "confirmed some footnotes and references were incorrect," the department said in a statement Tuesday. Deloitte did not immediately respond to Fortune's request for comment. The revised version of the report includes a disclosure that a generative AI language system, Azure OpenAI, was used in its creation. It also removes the fabricated quotes attributed to a federal court judge and references to nonexistent reports attributed to law and software engineering experts. Deloitte noted in a "Report Update" section that the updated version, dated September 26, replaced the report published in July. "The updates made in no way impact or affect the substantive content, findings and recommendations in the report," Deloitte wrote. In late August the Australian Financial Review first reported that the document contained multiple errors, citing Rudge as the researcher who identified the apparent AI-generated inaccuracies. Rudge discovered the report's mistakes when he read a portion incorrectly stating Lisa Burton Crawford, a Sydney University professor of public and constitutional law, had authored a non-existent book with a title outside her field of expertise. 
"I instantaneously knew it was either hallucinated by AI or the world's best kept secret because I'd never heard of the book and it sounded preposterous," Rudge told The Associated Press on Tuesday. The Big Four consulting firms and global management firms such as McKinsey have invested hundreds of millions of dollars into AI initiatives to develop proprietary models and increase efficiency. In September, Deloitte said it would invest $3 billion in generative AI development through fiscal year 2030. Anthropic also announced a Deloitte partnership on Monday that includes making Claude available to more than 470,000 Deloitte professionals. In June, the UK Financial Reporting Council, an accountancy regulator, warned that the Big Four firms were failing to monitor how AI and automated technologies affected the quality of their audits. Though the firm will refund its last payment installment to the Australian government, Senator Barbara Pocock, the Australian Greens party's spokesperson on the public sector, said Deloitte should refund the entire $290,000. Deloitte "misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent," Pocock told Australian Broadcasting Corp. "I mean, the kinds of things that a first-year university student would be in deep trouble for." "The matter has been resolved directly with the client," a spokesperson from Deloitte Australia told The Associated Press.
[11]
Consultants Forced to Pay Money Back After Getting Caught Using AI for Expensive "Report"
Financial consulting firm Deloitte was forced to refund the Australian government US$291,000 after getting caught using AI and including hallucinated citations in a recent report. As The Guardian reports, Australia's Department of Employment and Workplace Relations (DEWR) confirmed that the firm agreed to repay the final installment as part of its contract. It had been commissioned in December to review a system that automates penalties in the welfare system in case jobseekers don't meet their mutual obligations. However, the "independent assurance review" bore concerning signs that Deloitte had cut corners, including multiple errors such as citations to nonexistent sources -- a hallmark of AI slop. The "hallucinations" once again highlight how generative AI use in the workplace can allow glaring mistakes to slip through, from lawyers getting caught citing nonexistent cases to Trump's Centers for Disease Control referencing a study that was dreamed up by AI earlier this year. Deloitte, among other consulting firms, has poured billions of dollars into developing AI tools that it says could speed up audits, as the Financial Times reports. Earlier today, the newspaper noted that the United Kingdom's six largest accounting firms hadn't been formally monitoring how AI impacts the quality of their audits, highlighting the possibility that many other reports may include similar hallucinations. University of Sydney socio-legal lecturer Christopher Rudge, who first highlighted the issues with Deloitte's DEWR report, said that the company tried to cover its tracks after sharing an updated version of the error-laden report. "Instead of just substituting one hallucinated fake reference for a new 'real' reference, they've substituted the fake hallucinated references and in the new version, there's like five, six or seven or eight in their place," he told The Guardian.
"So what that suggests is that the original claim made in the body of the report wasn't based on any one particular evidentiary source." Despite being caught red-handed using AI to generate hallucinated citations, Deloitte said that the overall thrust of its guidance hadn't changed. A footnote in the revised version noted that staffers had used OpenAI's GPT-4o for the report. "Deloitte conducted the independent assurance review and has confirmed some footnotes and references were incorrect," a spokesperson told The Guardian. "The substance of the independent review is retained, and there are no changes to the recommendations." But outraged lawmakers calling for more oversight. "Deloitte has a human intelligence problem," Labor senator Deborah O'Neill, who represents New South Wales, told the Australian Financial Review. "This would be laughable if it wasn't so lamentable... too often, as our parliamentary inquiries have shown, these consulting firms win contracts by promising their expertise, and then when the deal is signed, they give you whatever [staff] costs them the least." "Anyone looking to contract these firms should be asking exactly who is doing the work they are paying for, and having that expertise and no AI use verified," O'Neill added. "Otherwise, perhaps instead of a big consulting firm procurers would be better off signing up for a ChatGPT subscription." "This report was meant to help expose the failures in our welfare system and ensure fair treatment for income support recipients, but instead Labor [is] letting Deloitte take them for a ride," Greens senator Penny Allman-Payne told the AFR. "Labor should be insisting on a full refund from Deloitte, and they need to stop outsourcing their decisions to their consultant mates."
[12]
Deloitte to refund Australian government after AI hallucinations found in report
The financial services firm's report to the Department of Employment and Workplace Relations was originally published on the department's website in July. A revised version was published Friday after Chris Rudge, a Sydney University researcher of health and welfare law, said he alerted the media that the report was "full of fabricated references." Deloitte reviewed the 237-page report and "confirmed some footnotes and references were incorrect," the department said in a statement Tuesday. "Deloitte had agreed to repay the final instalment under its contract," the department said. The amount will be made public once the refund has been paid.
[13]
Deloitte's A$440,000 'Human Intelligence Problem' | AIM
On Monday, Deloitte made two contrasting headlines. One, it announced a major AI deal with Anthropic to roll out Claude across its global workforce of nearly half a million people. Two, it agreed to issue a refund to the Australian government for an AI-assisted report, published in July, that was filled with fabricated references and hallucinated citations. Both developments were reported the same day. On paper, the partnership with Anthropic marks Deloitte's biggest AI push yet. But the refund story undercuts it with perfect irony. As the company pushes deeper into AI, it is also being reminded of why AI still needs human supervision. The Australian Department of Employment and Workplace Relations confirmed that Deloitte's A$439,000 (US$290,000) "independent assurance review"...
[14]
Deloitte repays almost $98,000 of its $440,000 fee for AI-error report
Deloitte Australia has repaid almost $98,000, or more than 20 per cent, of the $440,000 fee it charged the federal government for a report that had to be reissued due to artificial intelligence-related errors. The big four consulting firm also instructed the team that produced the report for the Department of Employment and Workplace Relations (DEWR) to undertake additional training on how to responsibly use AI and properly review material produced by the technology.
[15]
Deloitte to partially refund Australia for report with apparent AI-generated errors
MELBOURNE, Australia (AP) -- Deloitte Australia will partially refund the 440,000 Australian dollars ($290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers. The financial services firm's report to the Department of Employment and Workplace Relations was originally published on the department's website in July. A revised version was published Friday after Chris Rudge, a Sydney University researcher of health and welfare law, said he alerted the media that the report was "full of fabricated references." Deloitte reviewed the 237-page report and "confirmed some footnotes and references were incorrect," the department said in a statement Tuesday. "Deloitte had agreed to repay the final instalment under its contract," the department said. The amount will be made public once the refund has been paid. Asked to comment on the report's inaccuracies, Deloitte told The Associated Press in a statement that the "matter has been resolved directly with the client." Deloitte did not respond when asked if the errors were generated by AI. A tendency for generative AI systems to fabricate information is known as hallucination. The report reviewed departmental IT systems' use of automated penalties in Australia's welfare system. The department said the "substance" of the report had been maintained and there were no changes to its recommendations. The revised version included a disclosure that a generative AI language system, Azure OpenAI, was used in writing the report. Quotes attributed to a federal court judge were removed, as well as references to nonexistent reports attributed to law and software engineering experts. Rudge said he found up to 20 errors in the first version of the report.
The first error that jumped out at him wrongly stated that Lisa Burton Crawford, a Sydney University professor of public and constitutional law, had written a nonexistent book with a title suggesting it was outside her field of expertise. "I instantaneously knew it was either hallucinated by AI or the world's best kept secret because I'd never heard of the book and it sounded preposterous," Rudge said. Work by his academic colleagues had been used as "tokens of legitimacy," cited by the report's authors but not read, Rudge said, adding that he considered misquoting a judge a more serious error in a report that was effectively an audit of the department's legal compliance. "They've totally misquoted a court case then made up a quotation from a judge and I thought, well hang on: that's actually a bit bigger than academics' egos. That's about misstating the law to the Australian government in a report that they rely on. So I thought it was important to stand up for diligence," Rudge said. Senator Barbara Pocock, the Australian Greens party's spokesperson on the public sector, said Deloitte should refund the entire AU$440,000 ($290,000). Deloitte "misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent," Pocock told Australian Broadcasting Corp. "I mean, the kinds of things that a first-year university student would be in deep trouble for."
[17]
Deloitte's botched AI report is now a headache for the big four
Demands are growing for the Albanese government to require consultants to disclose when they use artificial intelligence in their work after Deloitte was forced to admit that a report for which it charged a department $440,000 included errors introduced by the technology. Greens senator Barbara Pocock, who helped lead parliament's inquiry into PwC's tax leaks scandal, said government procurement rules should require external advisers to declare when they have used AI and verify any AI-generated material.
[18]
The cautionary lessons of Deloitte's AI sloppiness
Just when the big professional services firms were hoping to normalise relations with their political stakeholders in Canberra, Deloitte Australia has been forced to partially repay a federal government fee for an error-riddled report partly written using artificial intelligence. The Department of Employment and Workplace Relations paid Deloitte $440,000 to conduct an independent review of the IT systems used to automate penalties in the welfare system. The initial version of the report included three non-existent academic references and a fictitious quote from a Federal Court judge.
Deloitte agrees to partially refund the Australian government for a AU$440,000 report after admitting to using AI, which resulted in multiple errors and fabricated citations. The incident raises concerns about AI use in professional services.

Deloitte Australia has agreed to partially refund the Australian government for a report that was found to contain multiple errors and fabricated citations, attributed to the use of artificial intelligence (AI) [1]. The consulting giant will repay the final installment of its AU$440,000 ($291,245) contract with the Department of Employment and Workplace Relations (DEWR) [4].

The report, titled "Targeted Compliance Framework Assurance Review," was commissioned in December 2024 and published in August 2025. It focused on the technical framework used by the Australian government to automate penalties in the country's welfare system [1]. Shortly after publication, Dr. Christopher Rudge from the University of Sydney noticed multiple fabrications, including citations to non-existent academic reports and a fake quote attributed to a Federal Court judgment [4].

In the updated version of the report, Deloitte disclosed the use of "a generative AI large language model (Azure OpenAI GPT-4o) based tool chain" in its methodology [1]. This admission came after speculation about AI involvement, which Deloitte had initially declined to confirm [4].

The corrected version of the report, uploaded to the DEWR website, removed more than a dozen bogus references and footnotes, rewrote text, and fixed various typos [4]. Despite these changes, DEWR insists that the "substance of the independent review is retained, and there are no changes to the recommendations" [3].

The incident highlights the potential risks of using AI in professional services. Ironically, it coincided with Deloitte's announcement of a landmark AI enterprise deal with Anthropic, planning to roll out the chatbot Claude to its nearly 500,000 global employees [2].

Deloitte is not alone in facing challenges with AI-generated content. Other instances of AI hallucinations have been reported in various sectors, including journalism and legal proceedings [2]. The episode serves as a cautionary tale for the consulting industry, which is increasingly turning to AI to boost productivity and streamline operations [3].

As AI continues to embed itself in professional services, companies will need robust mechanisms to ensure the accuracy and reliability of AI-generated content. The incident also raises questions about transparency in AI usage and the potential need for new regulatory frameworks to govern AI applications in high-stakes professional environments [5].

Summarized by Navi