
This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Utility of ChatGPT in Subspecialty Consultation for Patients with Metastatic Cancers (Preprint)

2025 · 0 citations · 16 authors

Abstract

Background: Cancer management requires a multidisciplinary approach, often necessitating consultation with medical subspecialists. With the advent of artificial intelligence (AI) technologies such as ChatGPT, we hypothesized that these tools may help expedite the consultation process. This study aimed to assess the efficacy of ChatGPT in providing guideline-based subspecialty recommendations for managing patients with metastatic cancers.

Objective: N/A

Methods: In this proof-of-concept (PoC) study, patients with metastatic cancers who had at least one consultation referral to a subspecialty clinic were eligible. ChatGPT 4.0 was given the most recent clinic note that triggered a subspecialty consultation and was asked to provide an assessment and plan. Two physicians independently assessed the accuracy of the diagnoses made by ChatGPT as compared with those of the subspecialty physicians. The primary outcome was the consistency of ChatGPT's recommendations with those of the subspecialty physicians. Secondary outcomes included a comparison of medical decision-making (MDM) complexity levels between ChatGPT and the subspecialty physicians, and the potential time saved by using ChatGPT.

Results: A total of 75 consecutive eligible patients were included. Their primary diagnoses included prostate cancer (45.3%), kidney cancer (25.3%), bladder cancer (21.3%), testicular cancer (1.3%), and other (6.7%). The referred subspecialty clinics included cardiology (32.0%), hematology (13.3%), hepatology (5.3%), hospice (5.3%), neurology (16.0%), pulmonary (13.3%), and rheumatology (13.3%). Of the 75 patient charts reviewed by ChatGPT, 57 (76.0%) had the same diagnosis as the consulting subspecialists. In 24 (32.4%) cases, diagnostic adjudication indicated equal accuracy between ChatGPT and the physicians. Treatment plans were consistent between ChatGPT and the physicians in 44 cases (58.7%). ChatGPT recommended additional workup in 65 cases (86.7%). The average number of diagnoses made by ChatGPT was 7.6 per case, compared with 3.4 by the subspecialty physicians (p < 0.0001). The median waiting time for patients to be seen in subspecialty clinics was 60 days (IQR 18–116). The average number of words in ChatGPT's consultation notes was 415.7 (SD = 66.3), significantly lower than that of the subspecialty physicians (1,654.7; p < 0.0001).

Conclusions: These hypothesis-generating data suggest the potential utility of ChatGPT in assisting treating oncologists with the management of patients with metastatic cancer who require subspecialty consultations. Further studies are needed to validate our findings.
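The abstract does not describe the authors' tooling, so the sketch below is only a minimal illustration of the method's core step, assuming the OpenAI Python client and a GPT-4-class chat model; the model name, function name, and prompt wording are assumptions, not the study's protocol.

```python
# Hypothetical sketch: the study does not specify its tooling. The client
# library, model name, and prompt wording here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_assessment_and_plan(clinic_note: str, specialty: str) -> str:
    """Ask a GPT-4-class model for a draft assessment and plan based on
    the clinic note that triggered a subspecialty referral."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed; the abstract says only "ChatGPT 4.0"
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a consulting {specialty} physician. Given the "
                    "referring clinic note, provide an assessment and plan "
                    "with guideline-based recommendations."
                ),
            },
            {"role": "user", "content": clinic_note},
        ],
    )
    return response.choices[0].message.content


# Example usage with a de-identified note (placeholder text):
# print(draft_assessment_and_plan(note_text, "cardiology"))
```

Per the abstract, each such draft would then be compared against the subspecialist's actual consultation note, with two physicians independently adjudicating diagnostic accuracy.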
