Answers provided by OpenAI’s ChatGPT to a series of drug-related questions posed as part of a study by pharmacists were found to be incomplete or inaccurate nearly three-fourths of the time.

ChatGPT, which uses generative artificial intelligence (AI) to form responses to users’ prompts from data on the internet, was challenged by researchers at the American Society of Health-System Pharmacists (ASHP) with real questions posed to Long Island University’s College of Pharmacy drug information service over a 16-month period in 2022 and 2023. The research was presented at ASHP’s Midyear Clinical Meeting on Tuesday.

Pharmacists first researched and answered 45 questions, and those responses were reviewed by a second investigator to serve as the standard against which ChatGPT’s answers would be judged. Six of those questions were excluded due to a lack of literature to support a data-driven response, leaving 39 questions for ChatGPT to answer.

The study found that ChatGPT provided satisfactory answers to just 10 of the 39 questions posed. Among the other 29 questions, there were 11 cases in which ChatGPT’s response didn’t directly address the question, 10 in which it provided an inaccurate response and 12 in which its answer was incomplete, with some responses falling into more than one category. Researchers also asked ChatGPT to supply references for its responses, which it did in just eight of its answers, each of which cited non-existent references, according to the study.



A study by the American Society of Health-System Pharmacists found that nearly three-fourths of ChatGPT’s responses to medication questions were inaccurate or incomplete. (Andreas Arnold/picture alliance via Getty Images / Getty Images)

“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” said Sara Grossman, PharmD, the lead author of the study and an associate professor of pharmacy practice at Long Island University.

“Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources,” Grossman added.



According to the pharmacists’ study, ChatGPT provided non-existent references as citations for its responses. Generative AI chatbots can sometimes “hallucinate,” presenting incorrect information as fact. (LIONEL BONAVENTURE/AFP via Getty Images / Getty Images)

In one case, the researchers asked ChatGPT whether there’s a risk of a drug interaction between the COVID-19 antiviral Paxlovid and verapamil, a medication that lowers blood pressure, and the chatbot said no interactions had been reported for that combination of drugs.

“In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure,” Grossman said. “Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect.”



ChatGPT provided accurate responses to just 10 of the 39 medication-related questions posed in the pharmacists’ study. (Jaap Arriens/NurPhoto via Getty Images / Getty Images)

The ASHP study’s findings show that while AI tools like ChatGPT have shown potential in pharmacy and other medical settings, pharmacists should evaluate the use of specific AI tools in medication-related use cases and talk with patients about trustworthy sources of information about their medications, according to Gina Luchen, PharmD, ASHP director of digital health and data.

“AI-based tools have the potential to impact both clinical and operational aspects of care,” Luchen said. “Pharmacists should remain vigilant stewards of patient safety by evaluating the appropriateness and validity of specific AI tools for medication-related uses, and continuing to educate patients on trusted sources for medication information.”


A spokesperson for ChatGPT-maker OpenAI told FOX Business, “We guide the model to inform users that they should not rely on its responses as a substitute for professional medical advice or traditional care.”

Additionally, OpenAI’s usage policies note that “OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.”
