Librarians face mounting burden as AI chatbots flood them with requests for fake citations


Librarians report that AI chatbots like ChatGPT and Gemini are generating fabricated book titles, journal articles, and archival records that don't exist. About 15% of reference questions at the Library of Virginia now include AI hallucinations, forcing staff to spend hours proving materials don't exist while users increasingly trust AI over human experts.

AI Chatbots Generate Fabricated References at Alarming Scale

Librarians across institutions are reporting an exhausting new challenge: AI chatbots generating fake citations for books, journal articles, and archival records that simply don't exist. Sarah Falls, chief of researcher engagement at the Library of Virginia, estimates that approximately 15% of all emailed reference questions her team receives now include AI-generated content, with many containing requests for non-existent materials [1]. Tools like ChatGPT, Google Gemini, and Grok have become notorious for these AI hallucinations, creating what researchers are calling "AI slop" that wastes countless hours of professional time [2].

Source: Futurism

The International Committee of the Red Cross issued a stark warning about this growing problem, cautioning that AI models generating fabricated articles and archival references are creating serious obstacles for legitimate research. "These systems do not conduct research, verify sources, or cross-check information," the ICRC stated, explaining that AI chatbots "generate new content based on statistical patterns, and may therefore produce invented catalogue numbers, descriptions of documents, or even references to platforms that have never existed" [2]. The humanitarian organization, which maintains extensive archives, felt compelled to clarify that when references cannot be found, it doesn't mean the ICRC is withholding information; increasingly, it means the citations are AI fabrications [1].

Trust in AI Over Human Experts Creates New Challenges

What troubles librarians most isn't just the volume of requests for non-existent materials; it's the public distrust that follows when they explain a record doesn't exist. Falls notes that many people simply don't believe human experts when told their sources are fabricated. "For our staff, it is much harder to prove that a unique record doesn't exist," she explained [2]. This trust in AI over human experts stems partly from the authoritative voice AI adopts. The technology speaks with confidence, leading users to question whether librarians might be hiding information rather than accepting that their chatbot made an error [1].

Source: Gizmodo

One scholarly communications librarian on Bluesky described spending significant time hunting down citations for a student, only to discover after the third fruitless search that all the references came from Google's AI summary [2]. This burden extends beyond simple inconvenience; it fundamentally disrupts librarians' ability to assist with legitimate reference questions and academic research. The ICRC emphasized that AI systems "cannot indicate that no information exists. Instead, they will invent details that appear plausible but have no basis in the archival record" [2].

Real-World Impact on Academic Research and Information Literacy

The consequences extend well beyond library desks. A Chicago Sun-Times freelance writer used AI to generate a summer reading list with 15 book recommendations; ten of the recommended books didn't exist [1]. Robert F. Kennedy Jr.'s Make America Healthy Again commission released a report in May that NOTUS reporters later fact-checked, finding at least seven fabricated details among the citations [1].

OpenAI released an agentic model in February designed for "deep research" at the level of a research analyst, claiming lower hallucination rates. However, OpenAI admitted the model struggled to separate "authoritative information from rumors" and to convey uncertainty [2]. Some users believe prompt-engineering techniques, such as adding phrases like "don't hallucinate" to their prompts, will ensure accuracy, though if this actually worked, AI companies would likely implement it automatically [1].

A researcher on Bluesky captured the cascading effect: "Because of the amount of slop being produced, finding records that you KNOW exist but can't necessarily easily find without searching, has made finding real records that much harder" [2]. The flood of AI-generated content isn't just creating non-existent sources; it's drowning out authentic, human-written materials, making legitimate research increasingly difficult. As AI continues to proliferate in academic settings without adequate safeguards, the information literacy crisis deepens, leaving librarians to manage the fallout while their expertise faces unprecedented skepticism.
