Harnessing Artificial Intelligence in Bariatric Surgery: Comparative Analysis of ChatGPT-4, Bing, and Bard in Generating Clinician-Level Bariatric Surgery Recommendations

Yung Lee, MD, Thomas Shin, MD PhD, Léa Tessier, MD, Arshia Javidan, MD MSc, James Jung, MD PhD, Dennis Hong, MD MSc, Andrew T. Strong, MD, Tyler McKechnie, MD MSc, Sarah Malone, BSc, David Jin, BHSc, Matthew Kroh, MD, Jerry T. Dang, MD PhD, ASMBS Artificial Intelligence and Digital Task Force

Abstract

Background

The formulation of clinical recommendations pertaining to bariatric surgery is essential in guiding healthcare professionals. However, the extensive and continuously evolving body of literature in bariatric surgery presents a considerable challenge for clinicians seeking to stay abreast of the latest developments and acquire information efficiently. Artificial intelligence (AI) has the potential to streamline access to the salient points of clinical recommendations in bariatric surgery.

Objective

This study aims to appraise the quality and readability of answers generated by AI chat models in response to frequently asked clinical questions in the field of bariatric and metabolic surgery.

Setting

Remote.

Methods

Question prompts were developed from pre-existing clinical practice guidelines on bariatric and metabolic surgery and queried into three AI large language models (LLMs): OpenAI ChatGPT-4, Microsoft Bing, and Google Bard. The responses from each LLM were entered into a spreadsheet for randomized, blinded duplicate review. Accredited bariatric surgeons in North America independently assessed the appropriateness of each recommendation using a 5-point Likert scale. Scores of 4 and 5 were deemed appropriate, while scores of 1 to 3 indicated a lack of appropriateness. A Flesch Reading Ease (FRE) score was calculated to assess the readability of the responses generated by each LLM.
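For reference, the conventional Flesch Reading Ease formula (the abstract does not restate it, but this is its standard definition) is: FRE = 206.835 − 1.015 × (total words / total sentences) − 84.6 × (total syllables / total words), yielding scores from roughly 0 to 100, where higher values indicate text that is easier to read.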

Results

There was a significant difference among the three LLMs in their 5-point Likert scores, with mean values of 4.46 (SD 0.82), 3.89 (SD 0.80), and 3.11 (SD 0.72) for ChatGPT-4, Bard, and Bing, respectively (P<0.001). The proportion of appropriate answers also differed significantly among the three LLMs, with ChatGPT-4 at 85.7%, Bard at 74.3%, and Bing at 25.7% (P<0.001). The mean FRE scores for ChatGPT-4, Bard, and Bing were 21.68 (SD 2.78), 42.89 (SD 4.03), and 14.64 (SD 5.09), respectively, with higher scores indicating easier readability.

Conclusion

LLM-based AI chat models can effectively generate appropriate responses to clinical questions related to bariatric surgery, though performance varies considerably across models. Therefore, caution should be exercised when interpreting clinical information provided by LLMs, and clinician oversight is necessary to ensure accuracy. Future investigation is warranted to explore how LLMs might enhance healthcare provision and clinical decision-making in bariatric surgery.