
Is ChatGPT a valuable resource to support clinical pharmacists with factual knowledge?

Dr Merel van Nuland, lead author of a recent study on the impact of natural language processing tools in clinical pharmacy, discusses ChatGPT’s performance and promise in answering factual knowledge questions related to clinical pharmacy practice, compared with trained pharmacists’ responses.

With the rapid advancements in natural language processing (NLP) tools such as ChatGPT, there is growing interest in their potential to support clinical decision-making and patient care.

This extends to clinical pharmacy practice, which encompasses medicines optimisation, pharmaceutical preparation and analysis, therapeutic drug monitoring and patient education, among other aspects.

While traditional resources and databases can serve as reference sources for clinical questions, whether NLP models can reliably answer such questions has yet to be established.

The study

Our study, published in the Journal of Clinical Pharmacology, evaluated ChatGPT’s performance on factual knowledge questions related to clinical pharmacy practice in The Netherlands. The primary objective was to gauge its ability to handle multiple-choice questions, distinct from its proficiency in clinical reasoning.

Performance was assessed using 264 multiple-choice questions sourced from a Dutch application designed to maintain a basic knowledge level among clinical pharmacists.

The questions spanned 17 categories: anaesthesiology, intensive care and emergency room; pharmaceutical preparations and analysis; cardiology; geriatrics; haematology; infectious diseases; internal medicine; paediatrics; clinical pharmacology; lung diseases; medical gases; nephrology; neurology; oncology; psychiatry; radiopharmacy; and transplantation medicine.

The evaluation parameters for the responses included accuracy, consistency, quality of substantiation and reproducibility. Accuracy was defined as the correctness of the answer, and ChatGPT’s results were compared with the overall scores achieved by pharmacists.

Responses were deemed concordant if they contained no contradictions. Two independent pharmacists graded the quality of the substantiation on a four-point scale. Reproducibility was established by presenting the questions multiple times, both within the same day and on different days.
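For illustration only, the sketch below shows how metrics of this kind might be computed. The study itself does not publish code; the data structure, field names and example answers here are hypothetical.

```python
# Minimal sketch of how accuracy and reproducibility might be computed.
# The field names and example data are hypothetical, not the study's dataset.
from collections import defaultdict

responses = [
    # one entry per question attempt (hypothetical example data)
    {"question_id": 1, "day": 1, "run": 1, "answer": "B", "correct_answer": "B"},
    {"question_id": 1, "day": 1, "run": 2, "answer": "B", "correct_answer": "B"},
    {"question_id": 1, "day": 2, "run": 1, "answer": "C", "correct_answer": "B"},
]

# Accuracy: proportion of attempts where the chosen option matches the answer key.
accuracy = sum(r["answer"] == r["correct_answer"] for r in responses) / len(responses)

# Reproducibility: per question, the share of repeated attempts that agree
# with the answer given on the first attempt.
by_question = defaultdict(list)
for r in responses:
    by_question[r["question_id"]].append(r["answer"])

agree, total = 0, 0
for answers in by_question.values():
    first = answers[0]
    agree += sum(a == first for a in answers[1:])
    total += len(answers) - 1

reproducibility = agree / total if total else 1.0

print(f"accuracy={accuracy:.2f}, reproducibility={reproducibility:.2f}")
```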

Our results demonstrated that ChatGPT provided accurate responses for 79% of the questions, surpassing pharmacists’ accuracy of 66%. Concordance was high (95%), and the substantiation quality was rated good or excellent for 73% of the questions. Reproducibility was also consistently high within days and between days (> 92%) and across different users.

ChatGPT performed exceptionally well in oncology, nephrology and psychiatry, while its performance was less favourable in the pharmaceutical preparation and analysis; geriatrics; and anaesthesiology, intensive care and emergency room categories.

Addressing ChatGPT limitations

The study intentionally excluded aspects of clinical reasoning and decision-making.

Although ChatGPT answered 79% of the questions correctly, this percentage would need to increase to meet patient care standards, as there is no margin for error in clinical practice.

While ChatGPT performs well with factual knowledge questions, clinical decision-making involves a complex interplay of patient-specific factors that require the nuanced judgment of a trained clinical pharmacist.

ChatGPT’s current capabilities do not extend to such intricate decision-making processes. This limitation indicates that the findings cannot be directly extrapolated to its performance in practical, real-world clinical scenarios.

Furthermore, the multiple-choice questions used may not reflect the open-ended questions typically encountered by clinical pharmacists in practice.

Lastly, the prompt ChatGPT is given at the start of the process significantly influences the generated responses and could be further optimised, for instance, by defining the resources it must use to generate an answer.
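As an illustration of this kind of prompt optimisation (not the prompt used in the study), a system message could restrict the model to named reference sources and ask it to flag uncertainty. The sketch below uses the OpenAI Python client; the model name, source list and example question are assumptions.

```python
# Illustrative only: a system prompt that restricts the model to named
# reference sources and asks it to flag uncertainty. The model name and
# source list are assumptions, not the study's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "You are assisting a hospital pharmacist. Base your answer only on the "
    "Farmacotherapeutisch Kompas and the product's SmPC. If these sources "
    "do not cover the question, say so explicitly rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice of model
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Which antihypertensives require dose "
                                    "adjustment in severe renal impairment?"},
    ],
)

print(response.choices[0].message.content)
```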

Key messages and implications

Despite these limitations, our findings show that ChatGPT could serve as a valuable resource for pharmacists in clinical practice, providing factual information independent of national guidelines, such as details on drug interactions or mechanisms of action.

However, ongoing refinement of ChatGPT’s algorithms and training data is needed for it to become a robust and reliable tool. The expertise and interpretive skills of clinical pharmacists remain pivotal to applying this information in the clinical setting.

Critical evaluation by a clinical pharmacist is essential for its practical use. Therefore, integrating ChatGPT into clinical practice should be done cautiously, with a clear understanding of its limitations.

Author

Merel van Nuland PharmD PhD
Hospital pharmacist and clinical pharmacologist, Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands