This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Ethical Concerns Regarding the Use of Large Language Models in Healthcare
Citations: 13
Authors: 2
Year: 2023
Abstract
Large language models (LLMs) have brought new perspectives to healthcare, but they also present risks and pitfalls. We thank Daungsupawong et al.1 for their comments following the publication of our comprehensive literature review of natural language processing in vascular surgery.2 LLMs offer promising applications in patient care (by providing medical knowledge and patient empowerment, and by assisting with writing, translation, and summarisation), in education (through interactive learning and opportunities for personalised teaching), and in research (by facilitating access to scientific knowledge, science communication, and the production of scientific content).3 Nevertheless, the field is in its infancy, and we completely agree that clinicians, patients, and society should be very cautious and aware of the limitations and risks of LLMs.4 While LLMs reproduce some of the characteristics of human language, it is important to keep in mind that they do not comprehend the language they are dealing with, neither the input data (used for training) nor the output data (the responses generated).4,5 Because LLMs depend on the data used for their training, they can be biased by misinformation, errors, or outdated information in the training dataset.4,5 The models have no self-assessment of the generated content and therefore no control over whether that information is true or accurate. There is thus a critical lack of accountability. Finally, as LLMs are probabilistic algorithms, they may not provide the same answer to the same task, or when a question is repeated multiple times, making it extremely challenging to evaluate their reliability and reproducibility.4,5

Like other AI driven applications, LLMs raise major ethical and legal concerns regarding their use in healthcare. These include questions related to health data protection; equity and fairness; safety and security; transparency; responsibility and accountability; clinical benefits and costs; and acceptability, perception, and integration by patients and health professionals.6 Methods for evaluating LLMs in the real world remain unclear, and there is a critical need to build guidelines and recommendations. Specific standards to assess the accuracy and quality of AI applications in healthcare are currently being developed,7 and it would be of great interest to build specific guidelines for LLMs to help evaluate their potential benefits and risks before their implementation in clinical practice. As highlighted by Shah et al., health professionals cannot step aside but should be proactive to ensure that AI driven innovations augment human expertise without replacing it, with the aim of improving the care provided to patients.8
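The reproducibility concern raised above follows directly from how LLMs decode text: each token is drawn from a probability distribution rather than chosen deterministically. The following minimal sketch (not from the letter; the toy logits and vocabulary are illustrative assumptions) shows why repeating the same prompt can yield different outputs when sampling with a nonzero temperature, while greedy decoding (temperature 0) is repeatable.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick one token id from softmax(logits / temperature).

    temperature == 0 means greedy decoding: always the most likely token.
    Higher temperatures flatten the distribution and increase variability.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # numerically stable softmax numerators
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical logits for a 4-token vocabulary at one decoding step.
logits = [2.0, 1.5, 0.5, -1.0]

rng = random.Random(0)
# Repeating the "same question" 50 times with sampling yields several
# different tokens; greedy decoding always returns the same one.
sampled = {sample_token(logits, temperature=1.0, rng=rng) for _ in range(50)}
greedy = {sample_token(logits, temperature=0, rng=rng) for _ in range(50)}
```

In a deployed chatbot this per-token randomness compounds over an entire response, which is why evaluating an LLM on a single run of each prompt can be misleading.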
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations