Librarians are facing mounting challenges as artificial intelligence (AI) tools increasingly generate fictitious book and article references. A recent article in *Scientific American* highlights the growing frustration among librarians who are inundated with inquiries about non-existent titles, with an estimated 15% of all emailed reference questions at the Library of Virginia stemming from AI-generated content.

Sarah Falls, chief of researcher engagement at the Library of Virginia, noted that these requests often include questions about fabricated citations. This trend has fostered a troubling dynamic in which individuals appear to trust AI chatbots over the expertise of seasoned librarians. Falls emphasized that even when librarians explain that a cited work does not exist, many patrons remain unconvinced.

AI Hallucinations Impacting Information Retrieval

The issue of AI-generated misinformation isn’t confined to one institution. The International Committee of the Red Cross (ICRC) recently issued a notice addressing this phenomenon. It stated, “If a reference cannot be found, this does not mean that the ICRC is withholding information. Various situations may explain this, including incomplete citations, documents preserved in other institutions, or—increasingly—AI-generated hallucinations.” This statement underscores the growing concern regarding the reliability of AI outputs.

Instances of AI fabricating references have become more common. In May, the Make America Healthy Again commission, led by Health Secretary Robert F. Kennedy Jr., released a report filled with erroneous citations. Journalists from NOTUS discovered that at least seven of these citations were completely fabricated. Similarly, a freelance writer for the *Chicago Sun-Times* published a summer reading list that included ten books that do not exist.

Longstanding Issues with Citation Integrity

While AI is a notable contributor to the current challenges, misinformation in academic citations is not a new issue. A 2017 study by a professor at Middlesex University found that more than 400 scholarly papers cited a nonexistent research work that was essentially nonsensical filler text. Such citations, which typically arise from carelessness rather than deceit, raised concerns about the integrity of academic research well before the advent of AI.

The situation reflects a broader societal shift in trust. Many individuals now perceive AI as a more credible source of information than human experts. This phenomenon may be attributed to the authoritative tone employed by AI models, which can lead users to favor chatbot responses over those from librarians.

As users seek to enhance the reliability of AI outputs, some have adopted strategies they believe will improve the accuracy of their interactions. Phrases like “don’t hallucinate” or “write clean code” are often included in prompts, on the assumption that such instructions will yield higher-quality responses. If such phrasing reliably improved output quality, major tech companies would likely have built it into their systems by default.
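The folk practice described above amounts to little more than prepending boilerplate instructions to a question. As a minimal sketch (the function name and phrases here are illustrative, not drawn from any real tool), the pattern looks like this:

```python
def build_prompt(question: str, magic_phrases: list[str]) -> str:
    """Prepend folk-wisdom instructions to a user question.

    Note: concatenating phrases like "don't hallucinate" does not
    guarantee a model will comply; this only shows the pattern users
    follow, not a proven mitigation.
    """
    preamble = " ".join(magic_phrases)
    return f"{preamble}\n\n{question}"


prompt = build_prompt(
    "List five peer-reviewed articles on library reference services.",
    ["Do not hallucinate.", "Only cite sources that actually exist."],
)
print(prompt)
```

The sketch makes the limitation plain: the instructions are just more text in the prompt, with no mechanism forcing the model to verify that the references it produces are real.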

The implications of this trend are significant. As AI-generated misinformation continues to proliferate, the role of librarians becomes increasingly vital in guiding the public toward accurate information and resources. The exhaustion voiced by librarians across various institutions highlights an urgent need for greater public awareness of the limitations of AI tools.

In a world where information is more accessible than ever, ensuring the integrity of sources remains a crucial challenge, one that requires collaboration between technological advancements and human expertise.