2 Sources
[1]
Librarians Aren't Hiding Secret Books From You That Only AI Knows About
Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources. But for the folks tasked with helping the public find books and journal articles, the fake AI bullshit is really taking its toll. Librarians sound absolutely exhausted by the requests for titles that don't exist, according to a new report from Scientific American. The magazine spoke with Sarah Falls, the chief of researcher engagement at the Library of Virginia, who estimates that about 15% of all emailed reference questions her library receives are generated by AI chatbots like ChatGPT. And the requests often include questions about fake citations. What's more, Falls suggests that people don't seem to believe librarians when they explain that a given record doesn't exist, a trend that's been reported elsewhere, including by 404 Media. Many people really believe their stupid chatbot over a human who specializes in finding reliable information day in and day out.

A recent post from the International Committee of the Red Cross (ICRC) titled "Important notice: AI generated archival reference" provides more evidence that librarians are just exhausted with it all. "If a reference cannot be found, this does not mean that the ICRC is withholding information. Various situations may explain this, including incomplete citations, documents preserved in other institutions, or, increasingly, AI-generated hallucinations," the organization said. "In such cases, you may need to look into the administrative history of the reference to determine whether it corresponds to a genuine archival source."

The year seems to have been filled with examples of fake books and journal articles created with AI. A freelance writer for the Chicago Sun-Times generated a summer reading list for the newspaper with 15 books to recommend. But ten of the books didn't exist. The first report from Health Secretary Robert F. Kennedy Jr.'s so-called Make America Healthy Again commission was released in May. A week later, reporters at NOTUS published their findings after going through all of the citations. At least seven didn't exist.

You can't blame everything on AI, though. Papers have been retracted for giving fake citations since long before ChatGPT or any other chatbot came on the scene. Back in 2017, a professor at Middlesex University found at least 400 papers citing a non-existent research paper that was essentially the equivalent of filler text. The citation: Van der Geer, J., Hanraads, J.A.J., Lupton, R.A., 2010. The art of writing a scientific article. J. Sci. Commun. 163 (2), 51-59. It's gibberish, of course. The citation seems to have been included in many lower-quality papers, likely due to laziness and sloppiness rather than an intent to deceive. But it's a safe bet that the authors of those pre-AI papers would probably have been embarrassed by its inclusion.

The thing about AI tools is that too many people have come to believe chatbots are more trustworthy than humans. Why might users trust their AI over a person? For one thing, part of the magic trick that AI pulls is speaking in an authoritative voice. Who are you going to believe, the chatbot you're using all day or some random librarian on the phone? The other problem might have something to do with the fact that people develop what they believe are foolproof tricks for making AI more reliable. Some people even think that adding things like "don't hallucinate" and "write clean code" to their prompts will make sure their AI only gives the highest-quality output.
If that actually worked, we imagine companies like Google and OpenAI would just add it to every prompt for you. And if it does work, boy, have we got a lifehack for all the tech companies currently terrified of the AI bubble bursting.
[2]
Librarians Dumbfounded as People Keep Asking for Materials That Don't Exist
Librarians, and the books they cherish, are already fighting a losing battle for our attention spans against all kinds of tech-enabled brainrot. Now, in a further assault on their sanity, AI models are generating so much slop that students and researchers keep coming into libraries and asking for journals, books, and records that don't exist, Scientific American reports.

In a statement from the International Committee of the Red Cross spotted by the magazine, the humanitarian organization cautioned that AI chatbots like ChatGPT, Gemini, and Copilot are prone to generating fabricated archival references. "These systems do not conduct research, verify sources, or cross-check information," the ICRC, which maintains a vast library and archives, said in the warning. "They generate new content based on statistical patterns, and may therefore produce invented catalogue numbers, descriptions of documents, or even references to platforms that have never existed."

Library of Virginia chief of researcher engagement Sarah Falls told SciAm that the AI inventions are wasting the time of librarians who are asked to hunt down nonexistent records. Fifteen percent of the emailed reference questions Falls's library receives, she estimates, are now ChatGPT-generated, and they often include hallucinated primary-source documents and published works. "For our staff, it is much harder to prove that a unique record doesn't exist," Falls added.

Other librarians and researchers have spoken out about AI's effects on their profession. "This morning I spent time looking up citations for a student," wrote one user on Bluesky who identified themselves as a scholarly communications librarian. "By the time I got to the third (with zero results), I asked where they got the list, and the student admitted they were from Google's AI summary." "As a librarian who works with researchers," another wrote, "can confirm this is true."

AI companies have put a heavy focus on creating powerful "reasoning" models aimed at researchers, tools meant to conduct a vast amount of research off a few prompts. In February, OpenAI released its agentic model for conducting "deep research," which it claims performs "at the level of a research analyst." At the time, OpenAI claimed the model hallucinated at a lower rate than its other models, but admitted it struggled with separating "authoritative information from rumors" and with conveying uncertainty when presenting information.

The ICRC warned about that pernicious flaw in its statement. AI chatbots "cannot indicate that no information exists," it stated. "Instead, they will invent details that appear plausible but have no basis in the archival record."

Though AI's hallucinatory habit is well known by now, and though no one in the AI industry has made particularly impressive progress in clamping down on it, the tech continues to run amok in academic research. Scientists and researchers, who you'd hope would be as empirical and skeptical as possible, are being caught left and right submitting papers filled with AI-fabricated citations. The field of AI research itself, ironically, is drowning in a flood of AI-written papers, with some academics publishing upwards of one hundred shoddily written studies a year.

Since nothing happens in a vacuum, authentic, human-written sources and papers are now being drowned out. "Because of the amount of slop being produced, finding records that you KNOW exist but can't necessarily easily find without searching, has made finding real records that much harder," lamented a researcher on Bluesky.
Librarians report that AI chatbots like ChatGPT and Gemini are generating fabricated references to book titles, journal articles, and archival records that don't exist. About 15% of emailed reference questions at the Library of Virginia are now AI-generated, often citing hallucinated sources, forcing staff to spend hours proving materials don't exist while users increasingly trust AI over human experts.
Librarians across institutions are reporting an exhausting new challenge: AI chatbots generating fake citations for books, journal articles, and archival records that simply don't exist. Sarah Falls, chief of researcher engagement at the Library of Virginia, estimates that approximately 15% of all emailed reference questions her team receives are now AI-generated, with many containing requests for non-existent materials [1]. Tools like ChatGPT, Google Gemini, and Grok have become notorious for these AI hallucinations, creating what researchers are calling "AI slop" that wastes countless hours of professional time [2].
Source: Futurism
The International Committee of the Red Cross issued a stark warning about this growing problem, cautioning that AI models generating fabricated articles and archival references are creating serious obstacles for legitimate research. "These systems do not conduct research, verify sources, or cross-check information," the ICRC stated, explaining that AI chatbots "generate new content based on statistical patterns, and may therefore produce invented catalogue numbers, descriptions of documents, or even references to platforms that have never existed" [2]. The humanitarian organization, which maintains extensive archives, felt compelled to clarify that when references cannot be found, it doesn't mean it is withholding information; increasingly, it means the citations are AI-fabricated [1].

What troubles librarians most isn't just the volume of requests for non-existent materials; it's the public distrust that follows when they explain a record doesn't exist. Falls notes that many people simply don't believe human experts when told their sources are fabricated. "For our staff, it is much harder to prove that a unique record doesn't exist," she explained [2]. This trust in AI over human experts stems partly from the authoritative voice AI adopts. The technology speaks with confidence, leading users to question whether librarians might be hiding information rather than accepting that their chatbot made an error [1].
Source: Gizmodo
One scholarly communications librarian on Bluesky described spending significant time hunting down citations for a student, only to discover after the third fruitless search that all the references came from Google's AI summary [2]. This burden on librarians extends beyond simple inconvenience; it fundamentally disrupts their ability to assist with legitimate reference questions and academic research. The ICRC emphasized that AI chatbots "cannot indicate that no information exists. Instead, they will invent details that appear plausible but have no basis in the archival record" [2].
The consequences extend well beyond library desks. A Chicago Sun-Times freelance writer used AI to generate a summer reading list with 15 book recommendations; ten of the recommended books didn't exist [1]. Robert F. Kennedy Jr.'s Make America Healthy Again commission released a report in May that NOTUS reporters later fact-checked, finding that at least seven of its citations referred to sources that don't exist [1].

In February, OpenAI released an agentic model designed for "deep research" at the level of a research analyst, claiming lower hallucination rates. However, OpenAI admitted the model struggled with separating "authoritative information from rumors" and with conveying uncertainty [2]. Some users believe prompt engineering techniques, such as adding phrases like "don't hallucinate" to their prompts, will ensure accuracy, though if this actually worked, AI companies would likely implement it automatically [1].

A researcher on Bluesky captured the cascading effect: "Because of the amount of slop being produced, finding records that you KNOW exist but can't necessarily easily find without searching, has made finding real records that much harder" [2]. The flood of AI-generated content isn't just creating non-existent sources; it's drowning out authentic, human-written materials, making legitimate research increasingly difficult. As AI continues to proliferate in academic settings without adequate safeguards, the information literacy crisis deepens, leaving librarians to manage the fallout while their expertise faces unprecedented skepticism.

Summarized by Navi