The artificial intelligence tool ChatGPT shows great promise in areas such as drug counselling but falls short of clinical pharmacists' skills across a range of clinical practice categories, according to a study by Chinese researchers.
To date, ChatGPT has been studied in several different areas of healthcare but is not always very effective. For instance, it has been found lacking in its ability to provide answers to cardiovascular questions and suggestions in complex case vignettes.
In the present study, published in the British Journal of Clinical Pharmacology, the research team set out to evaluate the performance of ChatGPT in key domains of clinical pharmacy practice compared with that of practicing clinical pharmacists. These domains included prescription review, patient medication education, recognition of adverse drug reactions (ADRs) and assessment of their causality, and drug counselling.
The researchers used a mix of questions, some collected from real clinical case studies and others taken from clinical pharmacist competency assessments.
Five clinical pharmacists independently rated the answers provided by both ChatGPT and the clinical pharmacists on a scale of 0 (completely incorrect) to 10 (completely correct). The mean scores of ChatGPT and clinical pharmacists were then compared.
ChatGPT performance compared
Quantitative analysis revealed that ChatGPT's accuracy in answering the questions varied depending on the specific area of practice.
For example, ChatGPT was comparable to clinical pharmacists at drug counselling (mean scores 8.77 vs. 9.50 for ChatGPT and pharmacists, respectively; p = 0.07). However, the system was less effective than pharmacists at prescription review (5.23 vs. 9.90, p = 0.0089), patient medication education (6.20 vs. 9.07, p = 0.0032), ADR recognition (5.07 vs. 9.70, p = 0.0483) and ADR causality assessment (4.03 vs. 9.73, p = 0.023).
For patient medication education, ChatGPT provided a well-organised and detailed list of therapeutic indications, dosing regimens and common adverse reactions for each medication. However, its answers were sometimes overly verbose and specialised, whereas the clinical pharmacists used layman's language to warn patients of common and life-threatening adverse reactions.
In addition, ChatGPT did not guide patients on necessary monitoring items and lifestyle changes.
The researchers concluded that while ChatGPT has potential as a supplementary tool in clinical settings, further enhancements and refinements are needed, particularly in expanding medicine-specific datasets and augmenting its capabilities for advanced reasoning and complex instructions.