Study Finds ChatGPT Makes Serious Errors When Answering Drug Questions


On Tuesday, a new study suggested that the free version of ChatGPT may provide inaccurate or incomplete responses—or no answer at all—regarding medications, potentially putting patients at risk.

Researchers at Long Island University found that out of 39 questions related to drugs posed to the free ChatGPT in May, only 10 responses were deemed “satisfactory” based on their criteria.

The chatbot’s answers to the remaining 29 drug-related questions were either inadequate, incorrect, or did not address the question directly.

The study highlights the need for caution among patients and healthcare professionals when using ChatGPT for drug information.

Lead author Sara Grossman, an associate professor of pharmacy practice at LIU, recommended verifying any information obtained from the chatbot with trusted sources, such as a doctor or a government medication information website like MedlinePlus.

An OpenAI spokesperson stated that ChatGPT is designed to inform users that its responses should not replace professional medical advice or traditional care.

The spokesperson also referenced OpenAI’s usage policy, which clarifies that the company’s models are not specifically tuned to provide medical information and should not be used for diagnosing or treating serious medical conditions.

Since its launch about a year ago, ChatGPT has been recognized as one of the fastest-growing consumer internet applications, contributing to a significant year for artificial intelligence.

However, it has also faced concerns related to fraud, intellectual property, discrimination, and misinformation.

Several studies have identified similar issues with ChatGPT’s responses, and the Federal Trade Commission began investigating the chatbot’s accuracy and consumer protection measures in July.

In October, ChatGPT attracted approximately 1.7 billion visits globally, though data on how many users seek medical advice is not available.

The free version of ChatGPT relies on data up to September 2021, potentially missing updates in the rapidly evolving medical field.


It remains unclear whether the paid versions of ChatGPT, which began incorporating real-time internet browsing earlier this year, are more accurate on medication-related queries.

Grossman noted that a paid version of ChatGPT might produce better results, but the study focused on the free version to reflect what most users have access to.

She acknowledged that the research represents only a snapshot of the chatbot’s performance from earlier this year and suggested that the free version might have improved since then.

The study, presented at the American Society of Health-System Pharmacists’ annual meeting, was conducted without external funding.

The research involved real questions received by Long Island University’s College of Pharmacy drug information service from January 2022 to April of this year.

Pharmacists posed 45 questions to ChatGPT in May, and a second researcher reviewed the responses against established accuracy criteria. Six questions were excluded because there was no published literature against which to judge the answers.

ChatGPT failed to address 11 questions directly, provided inaccurate answers to 10, and gave incorrect or incomplete responses to another 12. When the researchers asked it to cite references, ChatGPT supplied them in only eight responses, and in each case the references it cited did not exist.

One query about drug interactions between Pfizer’s Covid antiviral pill Paxlovid and the blood-pressure medication verapamil resulted in ChatGPT incorrectly stating no interactions were reported.

In reality, these drugs can dangerously lower blood pressure when used together. Grossman highlighted that this could lead to preventable adverse effects if patients are unaware of the interaction.

Another question about converting doses between intrathecal and oral baclofen—a drug for muscle spasms—resulted in ChatGPT providing a method not supported by evidence.

The example given contained a serious error, showing an intrathecal dose in milligrams instead of micrograms, which could lead to a dosage 1,000 times less than needed, potentially causing withdrawal effects such as hallucinations and seizures.

Jessica Smith