OpenAlex · Updated hourly · Last updated: 17.05.2026, 20:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Ethical risks of AI-enabled remote patient monitoring for COPD: a multi-dimensional use case analysis

2025 · 0 citations · AI & Society · Open Access

Citations: 0
Authors: 3
Year: 2025

Abstract

Artificial intelligence (AI)-enabled remote patient monitoring (RPM) is promoted as a solution to rising pressures in health care, including personnel shortages and the growing burden associated with population aging and chronic disease management. Yet, the ethical implications of deploying adaptive systems in routine care remain underexamined at the level of specific, situated use cases. This article examines the ethical risks of MonitAir, an AI-enabled RPM system for chronic obstructive pulmonary disease (COPD) in a Swedish health care setting. Drawing on the Three Domains, Six Levels (3D6L) framework, we identify epistemic, normative and traceability-related risks across six levels of abstraction. The article offers, to our knowledge, the first operationalization of the 3D6L framework as an analytic tool for screening ethical risks. We argue that screening AI-enabled health care technology with 3D6L clarifies how ethical risks manifest across levels, from individual patients and patient–clinician relationships to organizational and sectoral contexts. In addition, the framework’s minimal normativity allows alignment with context-sensitive principles and guidelines. Through this analysis, we identify ethical risks related to data bias, intelligibility of outputs, uneven access and blurred responsibility, including redistributive and role-shifting effects. While MonitAir may support earlier detection of exacerbations, its implementation in Swedish COPD care may also reproduce and amplify existing health disparities or overburden patients without sufficient support. We demonstrate how structured ethical screening makes visible concerns typically overlooked by the optimization discourse in digital health. Finally, we argue that open-ended evaluation of the ethical risks of AI-enabled digital health technologies (DHT) provides a valuable early phase that complements ethical assessment, without collapsing into checklist compliance.
