In the quiet corners of the internet, a new kind of phantom has emerged: a disease that does not exist. Over the past several months, a purely fictional medical condition has been appearing in the responses of conversational AI chatbots. What began as a digital hallucination, a familiar quirk of large language models, soon gained an alarming degree of legitimacy, eventually making its way into the pages of a professional medical journal.
The incident is a stark illustration of the recursive feedback loops now haunting the information ecosystem. As AI-generated content increasingly populates the web, it is re-absorbed by the very algorithms designed to synthesize human knowledge. When a chatbot invents a plausible-sounding pathology and that invention is subsequently cited or indexed, the fiction acquires a veneer of authority that can deceive even seasoned researchers and editors.
This digital drift echoes famous scientific hoaxes of the past, from the Piltdown Man to the Sokal affair. However, unlike those intentional deceptions, this recent infiltration appears to be a byproduct of the system’s own design—a glitch in the machinery of truth. As the boundary between verified data and algorithmic synthesis thins, the scientific record faces a new challenge: ensuring that the archives of the future are not built on the hallucinations of the present.
With reporting from Le Monde Sciences.