
A US medical journal has issued a warning against using ChatGPT for health information after a 60-year-old man who sought dietary advice from the chatbot developed bromism (bromide toxicity) and ended up in the hospital.
According to an article published in the Annals of Internal Medicine, the man followed advice from ChatGPT about eliminating sodium chloride (table salt) from his diet.
He started consuming sodium bromide, a compound used as a sedative in the early 20th century, even though it is now intended for non-dietary and industrial uses.
The patient developed symptoms, including paranoia, excessive thirst, psychosis, and insomnia.
Initially, the man believed that his neighbour was poisoning him and refused treatment at the hospital.
After he was eventually admitted, doctors confirmed bromism, a condition that was once responsible for a significant number of psychiatric admissions.
The article underscored concerns about artificial intelligence (AI)-generated health misinformation.
The authors, from the University of Washington, noted that they could not access the patient's chat log and therefore could not verify the exact conversation he had with the chatbot.
However, when they queried ChatGPT themselves, it also suggested bromide as a possible chloride substitute, without offering any health warning.
The case highlights the risks of relying on AI tools such as ChatGPT for medical advice, although OpenAI has said its latest versions are better at flagging health-related risks.
The authors also advised health professionals to consider AI as a possible source of misinformation when assessing patients.