AI Health Alert: 2023 Dictionary.com Word of the Year – Hallucinate.

Key Points:

  • Dictionary.com has named ‘hallucinate’ as its 2023 Word of the Year, defining it, in the context of Artificial Intelligence (AI), as the production of false information contrary to the user’s intent.
  • The increasing proliferation of AI has been accompanied by a rise in misinformation, with significant health implications.
  • The World Health Organization and the American Medical Association have issued warnings regarding AI-generated misinformation.

In the expanding world of artificial intelligence, the term ‘hallucinate’ has taken on a new definition, leading Dictionary.com to name it its 2023 Word of the Year. In the context of AI, ‘hallucinate’ refers to the production of false information, contrary to the user’s intent, presented as if it were factual. The rise of AI has been accompanied by a surge in such ‘hallucinations’, increasing the risk of misinformation and its potentially damaging effects.

The settings in which such AI ‘hallucinations’ occur range from health advice delivered by chatbots to political misinformation on social media platforms. For instance, a study in JAMA Internal Medicine found that OpenAI’s GPT Playground could generate over 17,000 words of disinformation related to vaccines and vaping in just 65 minutes. Additionally, generative AI tools produced 20 accompanying realistic images in less than 2 minutes.

AI tools can also produce misleading information inadvertently, as shown in a study presented at the American Society of Health-System Pharmacists’ Midyear Clinical Meeting. Researchers found that of 39 medication-related questions posed to ChatGPT, only 10 received satisfactory answers. In one example, ChatGPT wrongly claimed that Paxlovid, a Covid-19 antiviral medication, and verapamil, a blood pressure medication, have no interactions; in fact, combining the two can cause blood pressure to drop to dangerously low levels.

While such AI ‘hallucinations’ span many domains, their health implications are significant. Misinformation can cause mental and emotional stress and even lead people to question their sense of reality. The World Health Organization and the American Medical Association have issued statements cautioning against AI-generated misinformation. Despite these warnings, AI hallucinations are expected to continue growing in the coming years, emphasizing the need for further action.