6 Sources
[1]
South Africa's AI policy cited fake research, created by AI: what lessons need to be learned
South Africa's first attempt to establish a binding artificial intelligence (AI) policy framework came to an abrupt halt just 16 days after it was officially gazetted. On 10 April, the Department of Communications and Digital Technologies published the Draft South Africa National Artificial Intelligence Policy for public comment. Journalists checked the references and found that they contained fabrications. These fell into two categories: academic journals that do not exist, and real journals in which the referenced research articles were never published. Such fabrications are typical of a known generative AI problem called hallucination.

Withdrawing the draft, the communications minister was frank: the problem was not a technical glitch but a failure of oversight. Generative AI was used without proper human verification of the sources, compromising the credibility and integrity of the document.

Read more: AI policies in Africa: lessons from Ghana and Rwanda

Much of the public commentary has treated this as an embarrassment: the policy meant to govern AI was itself undermined by AI. As a senior lecturer in cyber law, including the regulation of AI, I argue that framing this episode as an embarrassment obscures what needs to be examined. It misses the main point of what is at stake.

The hallucinated citations reveal two specific failures. Epistemic integrity (the assurance that research has been conducted through reliable, ethical and repeatable methods that any reader could verify) was absent. So was information integrity (the public's reasonable expectation that information from an authoritative source can be trusted). The policy was not equipped to govern either of these failures, and has now itself demonstrated both.

This matters because generative AI can be harmful, and its harms are not limited to fake references. They also include fake images, fake videos, fake voices, and the weaponisation of people's likenesses through deepfakes.

What is AI hallucination?

Hallucinations are a known problem of generative AI, the category of AI that produces content such as text, images, audio and video through tools like ChatGPT and Grok.

Read more: What are AI hallucinations? Why AIs sometimes make things up

Hallucinations happen when an AI system, in trying to fulfil a task, produces content that sounds convincing but is inaccurate or entirely fabricated. They are a growing problem. In universities, academics have been found listing fake AI-generated sources. In courts in various countries, including South Africa, lawyers have submitted non-existent sources in their pleadings; there are many examples of such cases. And hallucinations appear in official documents, such as the retracted AI policy.

The hallucination did not just invent sources. It manufactured seemingly credible African scholarly authority. Highly respected authors' names were cast in a false light. It also attributed false evidence to real institutions that are recognised as authoritative publishers of academic papers.

What now?

South Africa's policy was based on responsible AI governance. Responsible AI needs accountability, transparency and explainability. These are non-negotiable conditions, echoed by the Organisation for Economic Co-operation and Development principles and the Smart Africa AI Blueprint that the policy draws on.

Read more: AI in Africa: 5 issues that must be tackled for digital equality

These governance principles are not just for AI system designers. They bind any institution that uses AI, including in the production of public documents. The policy failed all three in its own production. The department has some serious questions to answer on all these fronts.

1. Accountability

This is an opportunity for the department to gain the trust of South Africans and demonstrate resilient and responsible governance in action. Accountability calls for a comprehensive explanation of the extent to which the non-existent sources have affected the policy. The department should not proceed to revision without meeting the standards that the revised policy will propose for others.

2. Transparency

Transparency demands disclosure. Which sections of the policy are materially affected by the fake sources? Which tool was used? By whom? At which stage of drafting or compiling public submissions did they enter the policy? Was AI used to generate the literature review, the founding values, the synthesis of public comments, or all of the above? The department has not told us.

3. Explainability

Explainability demands that we can trace reasoning. The hallucinated sources appear in the reference list, but without a full disclosure from the department, the public cannot know which parts of the policy they were used to support, or how deeply they shaped its foundational priorities and values. The public comment sections, by contrast, have a verifiable record of where the information came from.

Read more: One in three South Africans have never heard of AI - what this means for policy

Explainability requires that we can trace what shaped the normative framework of the policy. Without a section-by-section review that tells the public which parts of the policy were affected, and to what extent, the department will have failed both the transparency and explainability requirements by the policy's own standards.

What needs to change

The retracted policy rightly recognised AI as a tool for inclusive economic growth, capacity development and human rights protection. It also acknowledged that it is a "point of departure" and that sector-specific approaches will be needed.

What must change is how generative AI is treated, both in the production of policy documents and in the mandates the policy creates for synthetic media, such as deepfakes. These are not problems to be sorted out later at sector level. They are cross-cutting public-trust challenges that require their own regulatory logic and governance mechanisms built on cross-sectoral cooperation. The revised policy must incorporate them as a structural pillar: not as a subcategory of innovation governance, but as a problem the state is already living with.

Read more: Deepfakes and South African law: remedies on paper, gaps in practice

This means designating a specific mandate holder for synthetic media and information integrity. Existing regulatory bodies already hold overlapping jurisdiction over digital content, identity harms and information distribution. What is missing is an agreed framework on definitions, remedies and the steps to be taken when generative AI is used to spread misinformation and disinformation through fake sources and synthetic media. Mandating that is not a question of creating new institutions. It is a question of political will and policy design.

Acknowledgements: After drafting this article myself, I used Claude to improve the readability of the piece. I personally drafted, verified and reviewed all the substance and sources referenced in it. I take full responsibility for the contents of this article.
[2]
AI Hallucinations Put South Africa on the Spot
South Africa's Democratic Alliance party has extolled the need to adopt modern technology to boost government efficiency since joining the ruling coalition as the second-biggest party in 2024. That enthusiasm, while well placed given the moribund nature of many South African state departments, has now come back to bite it. Two of its ministers have been embarrassed by AI "hallucinations" appearing in official policy documents in the past week.

On Thursday, Home Affairs Minister Leon Schreiber suspended two senior officials after references in a cabinet-approved policy document on immigration were flagged as artificial intelligence hallucinations. Just four days earlier, Communications Minister Solly Malatsi was forced to withdraw a policy paper released for public comment after News24, a local news website, reported that it contained numerous fictitious sources in its reference list. Ironically, it was a draft artificial intelligence policy.

"This should not have happened," Malatsi said. "It's a lesson we take with humility."
[3]
South Africa yanks AI policy after AI-assisted drafting invents sources
Eish, shame man! Maybe you shouldn't ask AI to set the rules for AI use? South Africa has pulled its draft national AI policy after discovering that it was citing sources that exist only in the fertile imagination of a chatbot.

The country's Department of Communications and Digital Technologies confirmed over the weekend that the draft, which had already cleared Cabinet and was out for public comment, included "various fictitious sources" in its reference list. Communications minister Solly Malatsi said the department rechecked the draft after reports flagged fake references and found some citations were indeed made up, prompting its withdrawal.

"This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," he said in a post on X, adding that AI-generated citations appear to have slipped in without anyone checking them. The document has now been yanked, and Malatsi said that those involved in drafting and sign-off can expect "consequence management."

"This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It's a lesson we take with humility," Malatsi said. "I want to reassure the country that we are treating this matter with the gravity it deserves."

The now-defunct policy was sold as a forward-looking framework, full of talk about "intergenerational equity" and AI benefiting current and future generations. It's now best known for a references section that doesn't hold up. Local outlet News24 had reported that at least six references in the report were fabricated, with experts saying that the errors matched classic AI hallucinations: convincing on the surface, entirely made up underneath.

Following the publication of News24's report, Khusela Sangoni-Diko, chair of the parliamentary portfolio committee overseeing the department, publicly told Malatsi to pull the document before it caused further embarrassment. She also suggested that the redraft skip "using ChatGPT this time," adding that the government should stop looking for a scapegoat, or "scape-bot."

All in all, it's a great look for a government trying to set the rules on AI when its own policy can't clear a basic fact check. And it's not exactly a one-off, either. As The Register reported last year, Deloitte had to help clean up a government report in Australia after AI-generated citations and even a made-up court quote slipped through: a reminder that letting the machine do the writing is one thing, and checking it is another.

South Africa has now learned that lesson the hard way. When your national AI policy cannot tell real sources from imaginary ones, it is probably not ready to regulate anyone else's machines. ®
[4]
South Africa withdraws AI policy due to fake AI-generated sources
JOHANNESBURG, April 27 (Reuters) - South Africa has withdrawn its first draft national AI policy after revelations that it contained fictitious sources in its reference list which appeared to have been AI-generated.

"The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened," Minister of Communications and Digital Technologies Solly Malatsi said. "This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," he wrote in a post on X on Sunday.

The policy, unveiled this month for public comment before finalization, sought to position South Africa as a continental leader in AI innovation while addressing ethical, social and economic challenges. It outlined plans to establish new institutions, including a National AI Commission, an AI Ethics Board and an AI Regulatory Authority, and to create incentives such as tax breaks, grants and subsidies to encourage private-sector collaboration.

Malatsi said there would be consequences for those responsible for drafting the policy, and did not say when a new one would be released. "This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It's a lesson we take with humility," he wrote.

Reporting by Nellie Peyton; Editing by Alison Williams
[5]
South Africa withdraws AI policy filled with AI hallucinations
South Africa's ambitions to become a continental leader in artificial intelligence have run into a deeply awkward obstacle: the country's draft national AI policy had to be withdrawn after it was found to contain fictitious, apparently AI-generated citations.

Reuters reported that the document was nearing finalization when fabricated references were discovered in its source list. Solly Malatsi, South Africa's Minister of Communications and Digital Technologies, announced the withdrawal in a statement posted to X on April 26, calling the lapse a direct compromise of the policy's integrity.

"This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," Malatsi wrote. "The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened. In fact, this unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical."

AI hallucinations remain a stubborn and largely intractable problem with language models, as Mashable has reported repeatedly. Phony citations have been a particular problem in legal documents, and a growing number of U.S. lawyers have been busted and reprimanded for submitting AI-generated legal briefs riddled with hallucinations. An online legal hallucination database maintained by lawyer and data scientist Damien Charlotin has found more than 900 such cases in the United States alone (and four in South Africa, not including the latest debacle).

The withdrawn policy had outlined the establishment of a national AI commission, an ethics board, and a regulatory body, alongside tax incentives, grants, and subsidies to stimulate private-sector investment. South Africa's stated goal, per Reuters, was to position itself as Africa's leading hub for AI innovation. Malatsi's statement did not indicate a timeline for when a revised draft will be produced.
[6]
South Africa withdraws AI policy due to fake AI-generated sources
South Africa's initial national AI policy has been shelved following findings that it included fictitious citations, likely produced through AI technologies. Minister Solly Malatsi voiced his concerns about the authenticity of the draft, stressing the critical need for human supervision in AI deployment. The individuals accountable for these discrepancies are facing potential sanctions. (The remainder of this piece republishes the Reuters report in source [4].)
South Africa's first national AI policy was pulled just 16 days after publication when journalists discovered fictitious sources in its reference list, a classic case of AI hallucinations. Communications Minister Solly Malatsi called it a failure of oversight rather than a technical glitch, one that compromised the document's credibility and proved why vigilant human oversight of AI is critical.
South Africa's ambitious attempt to position itself as a continental leader in artificial intelligence came to an abrupt halt when the country's draft national AI policy was withdrawn just 16 days after its official publication [1]. On April 10, the Department of Communications and Digital Technologies published the Draft South Africa National Artificial Intelligence Policy for public comment, but journalists quickly discovered that the document contained fabricated references, a telltale sign of AI hallucinations [3].

The fictitious research citations fell into two categories: academic journals that do not exist, and real journals in which the referenced research articles were never published [1]. News24, a local news website, reported that at least six references in the document were fabricated, with experts confirming that the errors matched classic generative AI hallucinations: convincing on the surface, entirely made up underneath [3].

Communications Minister Solly Malatsi was frank in his assessment of what went wrong. "This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," he wrote in a post on X [4]. The minister acknowledged that the most plausible explanation was that AI-generated citations were included without proper verification, stating emphatically: "This should not have happened" [2].

Malatsi emphasized that the lapse proves why vigilant human oversight over the use of artificial intelligence is critical, adding that it was "a lesson we take with humility" [5]. He indicated that those involved in drafting and sign-off can expect "consequence management", though he did not specify when a revised policy would be released [3].

The withdrawn policy had outlined ambitious plans to establish new institutions, including a National AI Commission, an AI Ethics Board and an AI Regulatory Authority [4]. It also proposed creating incentives such as tax breaks, grants and subsidies to encourage private-sector collaboration and stimulate investment in AI innovation [5].

The document was billed as a forward-looking framework built on principles of responsible AI governance, including accountability, transparency and explainability: non-negotiable conditions echoed by the Organisation for Economic Co-operation and Development principles and the Smart Africa AI Blueprint [1]. Ironically, the policy failed to meet all three of these standards in its own production.

The compromised draft raises serious questions about transparency and accountability in the government's use of AI. The Department of Communications and Digital Technologies has not disclosed which sections of the policy are materially affected by the fabricated sources, which generative AI tool was used, by whom, or at which stage of drafting the hallucinated citations entered the document [1].

The hallucinated sources did not just invent references: they manufactured seemingly credible African scholarly authority, casting highly respected authors' names in a false light and attributing false evidence to real institutions recognized as authoritative publishers of academic papers [1]. This undermines information integrity, the public's reasonable expectation that information from an authoritative source can be trusted.

The incident is not isolated within South Africa's government. Just four days after the policy withdrawal, Home Affairs Minister Leon Schreiber suspended two senior officials after references in a cabinet-approved policy document on immigration were also flagged as AI hallucinations [2]. Both ministers belong to the Democratic Alliance party, which has championed the adoption of modern technology to boost government efficiency since joining the ruling coalition as the second-biggest party in 2024 [2].

Khusela Sangoni-Diko, chair of the parliamentary portfolio committee overseeing the department, publicly told Malatsi to pull the document before it caused further embarrassment, suggesting that the redraft skip "using ChatGPT this time" and that the government should stop looking for a scapegoat, or "scape-bot" [3].

Phony citations generated by AI have become a particular problem in legal documents worldwide. An online legal hallucination database maintained by lawyer and data scientist Damien Charlotin has found more than 900 such cases in the United States alone, with four previously documented in South Africa [5]. A growing number of U.S. lawyers have been reprimanded for submitting AI-generated legal briefs riddled with hallucinations, while in Australia, Deloitte had to help clean up a government report after AI-generated citations and even a made-up court quote slipped through [3].

The South African case underscores that the harms of generative AI extend beyond fake references to include fake images, fake videos, fake voices, and the weaponization of people's likenesses through deepfakes and disinformation [1]. When a national AI policy cannot tell real sources from imaginary ones, it raises fundamental questions about the state's readiness to regulate AI more broadly [3].

Summarized by Navi