This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Ethical risks of AI-enabled remote patient monitoring for COPD: a multi-dimensional use case analysis
Citations: 0
Authors: 3
Year: 2025
Abstract
Artificial intelligence (AI)-enabled remote patient monitoring (RPM) is promoted as a solution to rising pressures in health care, including personnel shortages and the growing burden associated with population aging and chronic disease management. Yet, the ethical implications of deploying adaptive systems in routine care remain underexamined at the level of specific, situated use cases. This article examines the ethical risks of MonitAir, an AI-enabled RPM system for chronic obstructive pulmonary disease (COPD) in a Swedish health care setting. Drawing on the Three Domains, Six Levels (3D6L) framework, we identify epistemic, normative and traceability-related risks across six levels of abstraction. The article offers, to our knowledge, the first operationalization of the 3D6L as an analytic tool for screening ethical risks. We argue that screening AI-enabled health care technology with 3D6L clarifies how ethical risks manifest across levels, from individual patients and patient–clinician relationships to organizational and sectoral contexts. In addition, the framework’s minimal normativity allows alignment with context-sensitive principles and guidelines. Through this analysis, we identify ethical risks related to data bias, intelligibility of outputs, uneven access and blurred responsibility, including redistributive and role-shifting effects. While MonitAir may support earlier detection of exacerbations, its implementation in Swedish COPD care may also reproduce and amplify existing health disparities or overburden patients without sufficient support. We demonstrate how structured ethical screening makes visible concerns typically overlooked by the optimization discourse in digital health. Finally, we argue that open-ended evaluation of ethical risks of AI-enabled DHT provides a valuable early phase that complements ethical assessment, without collapsing into checklist compliance.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,697 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,602 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,127 cit.
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,872 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.