This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Meaning Matters for Large Language Models
0
Citations
2
Authors
2025
Year
Abstract
Large language models (LLMs) have achieved remarkable adoption. While AI providers position their systems as assistive helpers and knowledge tools, users increasingly employ them for open-ended interactions seeking life advice or creative exploration. This raises questions about the role of LLMs in such meaning-making activities, and the extent to which LLMs can access and encode meaning. In this conceptual essay, we apply Paul Ricoeur's hermeneutic philosophy to distinguish between structural and existential forms of meaning, revealing that LLMs can function as sophisticated conversational partners capable of engaging their vast “text” while lacking access to experientially grounded understanding. We come to interpret user prompting as genuine hermeneutic encounters that enable meaning-making. We further reveal that hallucinations, the propensity of LLMs to generate plausible-sounding yet incorrect responses, represent inevitable architectural trade-offs rather than eliminable technical failures. Our framework suggests new directions for LLM design that embrace generative capabilities and establishes principles for responsible user engagement.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations