This is an overview page with metadata about this scientific work. The full article is available from the publisher.
Empathy in AI for Health and Care Settings: How It’s Defined, Built, and Measured – A Scoping Review Protocol (Preprint)
Citations: 0
Authors: 3
Year: 2026
Abstract
<sec> <title>BACKGROUND</title> “Empathy” is widely discussed in health and care settings and is increasingly claimed as an attribute of AI (artificial intelligence) systems (e.g., socially assistive robots, chatbots), but the term is used inconsistently across the literature. In research on AI in these settings, it is often unclear what authors mean by “empathic AI”, what systems do that is intended to be empathic, and how empathy is assessed. This matters because perceived empathy can shape users’ experience of AI-mediated support and their willingness to engage with these systems. </sec>
<sec> <title>OBJECTIVE</title> To map how empathy is defined, operationalised, and evaluated in peer-reviewed AI research in health and care settings, and to identify recurring design features associated with higher perceived empathy. </sec>
<sec> <title>METHODS</title> This protocol outlines a scoping review following Joanna Briggs Institute (JBI) guidance and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). We use “AI” as an umbrella term and will extract and classify each system’s type (e.g., rule-based or large language model–based). We will search PubMed (MEDLINE), Embase, PsycINFO, CINAHL, Scopus, IEEE Xplore, and the ACM Digital Library. Two reviewers will screen titles/abstracts (ASReview) and full texts (Rayyan). We will extract study characteristics, empathy definitions/framing, empathy-related system behaviours/design features, and evaluation methods, and synthesise findings thematically. </sec>
<sec> <title>RESULTS</title> The review will produce (1) a summary of how empathy is defined in AI research in health and care settings, (2) a grouped list of the main empathic behaviours and design features described, and (3) an overview of how empathy is measured across studies. Where studies report empathy ratings, we will summarise which features are most commonly present in higher-rated systems within comparable contexts. </sec>
<sec> <title>CONCLUSIONS</title> The review will provide a clearer picture of what researchers mean by “AI empathy” in health and care settings and what system features are most commonly used when trying to build it. These findings may help guide the development of more empathic AI systems. </sec>
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,357 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,221 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,640 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,482 citations