Researchers Invent Fake Disease "Bixonimania" and AI Starts Diagnosing Real Patients With It
Major chatbots confidently prescribed treatment for an imaginary eye condition
Swedish researchers at the University of Gothenburg set out to test whether AI systems could be fooled by obviously fake medical research. They invented "bixonimania," a made-up eye condition supposedly caused by blue light exposure; wrote fictional papers with absurd acknowledgments thanking the "Professor Sideshow Bob Foundation" and Starfleet Academy; and uploaded them to academic preprint servers.
The papers contained explicit warnings like "this entire paper is made up" and described studying "fifty made-up individuals." Within weeks, ChatGPT, Google Gemini, and Microsoft Bing Copilot were confidently diagnosing real users with bixonimania and recommending they visit ophthalmologists.
More alarmingly, the fake research was cited in real peer-reviewed papers, including a study published in Cureus (a Springer Nature journal) by researchers at an Indian medical institute. After Nature contacted the journal, the paper was retracted on March 30, 2026.
The experiment demonstrates how AI training pipelines can't distinguish between legitimate research and obvious satire, even when papers explicitly state they're fictional.
Key Evidence
- Reported in Nature (news coverage of the experiment)
- Screenshots of major AI systems providing medical advice based on fake condition
- Real academic citation in Cureus journal (since retracted)
- Timeline showing rapid propagation from preprint to AI diagnosis within weeks
- Multiple AI systems affected: ChatGPT, Gemini, Copilot, Perplexity
The Rational Explanation
This reflects known vulnerabilities in large language model training. AI systems scrape massive internet datasets without quality filters that would catch obvious hoaxes. They pattern-match text that looks academically formatted and treat it as authoritative without understanding context or recognizing satire.
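In this case, even a trivial check would have flagged the papers. Here is a minimal sketch; the disclaimer phrases come from the Gothenburg papers quoted above, while the marker list and function name are illustrative assumptions, not code from any real vendor's pipeline:

```python
# Minimal sketch of a disclaimer check a scraping pipeline could run.
# HOAX_MARKERS and looks_like_hoax are hypothetical names for illustration;
# the phrases are the ones quoted from the Gothenburg papers.

HOAX_MARKERS = [
    "this entire paper is made up",
    "made-up individuals",
]

def looks_like_hoax(text: str) -> bool:
    """Return True if the document contains an explicit fiction disclaimer."""
    lowered = text.lower()
    return any(marker in lowered for marker in HOAX_MARKERS)

paper = "We studied fifty made-up individuals. This entire paper is made up."
print(looks_like_hoax(paper))  # True: even exact string matching catches it
```

A real pipeline would need far more than string matching, but that is the point: the disclaimers were machine-readable, and the training data still absorbed them.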
The systems weren't "fooled" so much as they operated exactly as designed — statistical text prediction based on training data, not actual comprehension.
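A toy model makes this concrete. The bigram "model" below is a deliberate simplification (real systems use neural networks, not word-pair counts), but it shows the same mechanism: once a phrase appears in training text, the model completes it confidently, and truth never enters the computation.

```python
from collections import Counter, defaultdict

# Toy bigram predictor as a stand-in for statistical text prediction.
# Co-occurrence in training text becomes confident output, whether or
# not the underlying claim is true.

training_text = (
    "bixonimania is an eye condition caused by blue light exposure . "
    "bixonimania is an eye condition treated by ophthalmologists ."
)

counts = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def complete(prompt: str, n_words: int = 5) -> str:
    """Greedily extend the prompt with the most frequent next token."""
    words = prompt.split()
    for _ in range(n_words):
        followers = counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# The model has no concept of eyes, diseases, or truth, yet it answers
# with total confidence:
print(complete("bixonimania"))  # -> "bixonimania is an eye condition caused"
```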
What We Don't Know
The deeper concern is how many other fictional or low-quality papers have been absorbed into AI training datasets. If systems can be convinced that "bixonimania" is real within weeks, what other medical misinformation is being confidently dispensed?
The retracted paper suggests some human researchers are also relying on AI-generated references without reading source material, creating a feedback loop where AI hallucinations become published "evidence" that further trains AI systems.
The Rabbit Hole
This connects to broader questions about AI reliability in healthcare. If chatbots can't recognize obviously fake medical conditions, how can they be trusted with real diagnosis or treatment recommendations?
The experiment also reveals how quickly misinformation can propagate through academic networks when human oversight fails. The combination of AI training on unfiltered data and researchers not verifying sources creates a vulnerability that bad actors could exploit systematically.