A US medical journal has issued a stark warning against using artificial intelligence for health issues after a 60-year-old man who followed medical advice from ChatGPT was hospitalised for three weeks.
The patient developed a rare condition called bromism after the chatbot recommended he replace table salt with sodium bromide to reduce his salt intake.
According to a paper published in the Annals of Internal Medicine, the man “decided to conduct the personal experiment” over a period of three months. This led to him developing bromism, a condition once common in the 19th century, when bromide salts were widely used as sedatives.
Symptoms of the condition can include psychosis, hallucinations, anxiety and skin problems. The man was admitted to the emergency department believing his neighbour was poisoning him, and he was later sectioned and treated with antipsychotic drugs.
While doctors were unable to access the patient’s original chat history, they ran their own test on ChatGPT to investigate the incident. Their query about a substitute for table salt returned the same dangerous recommendation: sodium bromide. Crucially, the chatbot neither provided any specific health warning nor asked why the user was seeking the information, questions a medical professional would have raised.
The case highlights the dangers of AI “hallucinations” and the spread of misinformation in health-related queries. While acknowledging the potential for AI to be a bridge between scientists and the public, the article’s authors warned that the technology can “fuel the spread of misinformation.”
In response, a spokesperson for OpenAI emphasised that users should never rely on the service’s output as a substitute for professional advice. The company has since launched a new update, GPT-5, which it claims provides more accurate responses and is better at flagging potential health concerns.