This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Relevance of Grounding AI for Health Care
Citations: 0
Authors: 1
Year: 2025
Abstract
As large language models (LLMs) like GPT-4 are increasingly deployed in clinical and administrative healthcare settings, questions about their conceptual grounding take on renewed urgency. While concerns about the lack of sensorimotor experience in symbolic AI systems have long been discussed in cognitive science and the philosophy of mind, their practical implications in medicine remain underexplored. This paper revisits the grounding problem through the lens of contemporary healthcare applications, arguing that the unique demands of medical reasoning (interpretive nuance, ethical sensitivity, and contextual depth) amplify the limitations of ungrounded AI. By reframing classic debates, such as Searle's Chinese Room and the Symbol Grounding Problem, within real-world clinical contexts, we highlight specific risks that emerge when LLMs are treated as epistemic agents rather than tools.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations