This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
How generative AI reconfigures clinician–AI and clinician–patient relationships
Citations: 0
Authors: 4
Year: 2026
Abstract
The rapid uptake of generative AI (GAI) systems in clinical settings—particularly large language models (LLMs) such as DeepSeek—marks a transformative moment for healthcare and necessitates timely regulatory responses to protect safety and equity. LLM-based GAI is currently entering clinical practice through a distinctive ‘dual-interface’ configuration. On one side, models locally deployed or fine-tuned by healthcare institutions are embedded into hospital information systems and patient-facing portals to streamline care processes, support clinical decision-making and facilitate personal health management.1 On the other, general-purpose models developed by companies such as OpenAI, Google and DeepSeek are reaching clinicians and patients through a growing ecosystem of chatbot applications.2,3 These channels make GAI increasingly accessible to both sides of the medical encounter. Clinicians use GAI to retrieve medical knowledge, generate analytic reasoning and obtain suggestions for diagnostic and therapeutic decisions (DTD).4 Simultaneously, patients consult these models about symptoms, use them to interpret complex clinical rationales and review clinicians’ recommendations.5 This dual-interface setting is shifting healthcare away from two separate dyads—‘clinician–patient’ and ‘clinician–AI tool’ (Figure 1A)—toward a configuration where multiple human–AI relations coexist (Figure 1A–D).

This Letter examines two linked axes of change. First, when GAI operates as an interactive cognitive collaborator, it enters the clinician-led chain-of-thought (CoT). In configuration (B), a previously clear ‘clinician–tool’ relation becomes a clinician–AI collaboration requiring a redefinition of roles. Second, when systems capable of generating clinical CoT interact directly with patients and act as decision-making agents, configurations (C)–(D) recast the clinician–patient relationship—its modes of explanation, patient involvement and trust—into a tripartite arrangement that explicitly includes AI.

For decades, medical AI—from rule-based expert systems to deep learning models—was positioned as a ‘tool’. These systems rarely prompted fundamental debate about role sharing,6 because they did not enter the core CoT that structures clinical practice: the process from problem formulation and evidence integration to DTD, which we refer to as the CoT for clinical reasoning and decision-making (clinical CoT). In the traditional configuration (Figure 1A), the clinical CoT is held exclusively by the clinician, and AI systems are invoked only at specific points at the clinician's discretion. Even when they ‘compute faster’, they primarily extend perceptual capacities and function as citable evidence or auxiliary signals rather than as genuine ‘speakers’ in clinical deliberation. This configuration persisted because clinical reasoning is fundamentally language-mediated—an unfolding professional exchange in which language is both the medium of intellectual interaction and the vehicle for judgment.7 For earlier AI, natural language understanding and generation remained a persistent bottleneck.
The advent and scaling of transformer-based conversational LLMs such as ChatGPT have eroded this bottleneck, enabling GAI to operate in a human-like, interactive linguistic space.4 GAI systems can now process medical information, integrate heterogeneous data and produce evidence-structured reasoning along with DTD recommendations that amount to a more complete clinical CoT.8 In this setting, AI outputs are no longer ‘signals’ requiring human translation; they appear as intelligible contributions directly comparable with human judgment. Consequently, GAI has shifted medical AI from a bounded tool toward a cognitive collaborator that substantively participates in the clinical CoT (Figure 1B).

This shift also generates new normative demands. Across different settings, it is now necessary to specify more precisely the legitimacy of AI involvement in the clinical CoT—for example, whether it should be confined to serving as a tool for prompting, a collaborative agent co-generating reasoning or even an authorized decision-maker. Correspondingly, questions arise regarding how far clinicians’ duties extend in prompting, reviewing and correcting model outputs, and where to draw responsibility boundaries for system providers regarding model design, updating and failure—all of which will require explicit responses in future regulatory frameworks.

In the traditional configurations (Figure 1A,B), patients have a direct relationship only with clinicians, while AI remains invisible to them. The reasoning behind DTD largely stays within professional discourse, leaving patients with limited insight into how these decisions are made even as they shoulder the outcomes and uncertainty.9 Meanwhile, constrained resources and heavy workloads further compress the time clinicians can devote to explanation and shared decision-making, leaving patients’ expectations of a participatory process unmet. Such a structurally constrained setting has long been associated with disputes and breakdowns of trust in clinical encounters, and it makes the ethical ideal of patient-centred care difficult to realize in practice.10

In this context, advances in GAI have enabled a growing range of chatbot applications to interact directly with patients (Figure 1C,D). Patients may encounter such systems within clinical settings, where they are used to supplement clinicians’ explanations and to elaborate on different DTDs in more accessible terms, or they may access them independently before or after consultations to ask questions about their symptoms, test results and proposed treatment plans. In these ways, GAI functions as an additional, structured explanatory resource, creating a parallel channel alongside traditional clinician–patient interaction and enabling patients to engage more actively in understanding and participating in the clinical CoT that shapes their own care.

However, this parallel channel also generates institutional challenges. Although GAI may enhance transparency, divergent explanations can create discord: if the system's reasoning departs from clinicians’ judgments, patients’ trust may be weakened. In cases of adverse outcomes, the interplay between clinician- and AI-based accounts further complicates the attribution of responsibility. Moreover, if clinicians are expected to verify patient-directed outputs or mediate three-way exchanges, GAI can shift from sharing the explanatory workload to adding new communicative and liability burdens for clinicians.
The scope of clinicians’ duties to review, qualify or override such outputs remains largely unsettled at the normative level.

In conclusion, GAI marks a structural shift from AI as a peripheral tool to a cognitive participant in the relational architecture of clinical care. In this emerging landscape, governance must prioritize layered regulatory frameworks tailored to specific clinician–AI–patient configurations, clarifying how reasoning, explanation and decision-making authority—and their attendant liabilities—are allocated between human and artificial agents.

Tianyi Shen conceived the manuscript. Tianyi Shen and Xinru Wang conducted background research and prepared the initial draft. Yi Zhang provided overall guidance and revised the manuscript. Yajuan Zhang provided expert input.

The authors declare no conflicts of interest.

This research was supported by the “Global Open Research Program on Sustainable Social Value (SSV Open Program)”; the program's funding was provided by the Institute for Sustainable Social Value, Tsinghua University.

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations