This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ESMO guidance on the use of Large Language Models in Clinical Practice (ELCAP)
Citations: 12
Authors: 21
Year: 2025
Abstract
BACKGROUND: Large language models (LLMs) are rapidly being integrated into health care, with substantial implications for oncology practice. The European Society for Medical Oncology (ESMO) developed the ESMO guidance on the use of Large Language Models in Clinical Practice (ELCAP) to provide a structured framework and basic guidance for their safe and effective application in oncology.

PATIENTS AND METHODS: Between November 2024 and February 2025, a multidisciplinary group of 20 experts convened under the ESMO Real World Data and Digital Health Task Force. Using literature review and a Delphi consensus process, the panel defined three categories of LLM use in oncology: type 1 (patient-facing applications), type 2 [health care professional (HCP)-facing applications], and type 3 (background institutional systems). Consensus statements were developed for each type to provide basic practical guidance.

RESULTS: ELCAP highlights opportunities such as improved patient education and symptom management, streamlined clinical workflows, and enhanced data processing. At the same time, it addresses challenges including data privacy, algorithmic bias, regulatory compliance, and the risk of unsupervised use. The framework emphasises human oversight, protection of patient privacy, and alignment with clinical and ethical standards. Patient-facing tools should complement, not replace, professional advice and should be embedded in supervised care pathways. HCP-facing and background systems may improve efficiency and decision support but require systematic validation, transparency, and continuous monitoring.

CONCLUSIONS: ELCAP provides a three-tier framework and basic practical guidance for LLM use in oncology. ESMO supports efforts to use this framework to improve patient care, but warns against unsupervised or unvalidated use.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
- Evelyn Wong
- Loïc Verlingue
- Mihaela Aldea
- Maria Alice Franzoi
- Renato Umeton
- Susan Halabi
- Nadia Harbeck
- Alice Indini
- Arsela Prelaj
- Emanuela Romano
- Elizabeth Smyth
- Iain Tan
- Antonios Valachis
- J.-F. Vibert
- Isabella C. Wiest
- Yongjie Yang
- Stephen Gilbert
- George Kapetanakis
- George Pentheroudakis
- M. Koopman
- Jakob Nikolas Kather
Institutions
- National Cancer Centre Singapore (SG)
- Centre Léon Bérard (FR)
- Université Paris-Saclay (FR)
- Institut Gustave Roussy (FR)
- Inserm (FR)
- St. Jude Children's Research Hospital (US)
- Cornell University (US)
- Massachusetts Institute of Technology (US)
- Duke University (US)
- Duke Medical Center (US)
- Duke University Hospital (US)
- Breast Center (CH)
- Ludwig-Maximilians-Universität München (DE)
- Fondazione IRCCS Istituto Nazionale dei Tumori (IT)
- Institut Curie (FR)
- Oxford Applied Research (United Kingdom) (GB)
- Oxford BioMedica (United Kingdom) (GB)
- Science Oxford (GB)
- Örebro University (SE)
- Heidelberg University (DE)
- University Hospital Heidelberg (DE)
- University Hospital Carl Gustav Carus (DE)
- National Health Research Institutes (TW)
- Else Kröner-Fresenius-Stiftung (DE)
- Hellenic Cancer Society (GR)
- European Society for Medical Oncology (CH)
- Utrecht University (NL)
- University Medical Center Utrecht (NL)
- National Center for Tumor Diseases (DE)