OpenAlex · Updated hourly · Last updated: 25 Apr 2026, 22:40

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating Digital Health Innovation: The Case for Person‐Centred Care Metrics Alongside Efficiency

2026 · 0 citations · Journal of Advanced Nursing · Open Access

0 citations · 2 authors · 2026

Abstract

Digital tools, such as electronic health records, telehealth services, early warning systems and AI-enabled clinical decision support systems, are rapidly transforming healthcare delivery. These technologies vary considerably in their primary aims; some are explicitly designed to enhance patient safety, while others focus on structuring information or optimising workflows. Many nursing digital tools explicitly target patient safety (Von Gerich et al. 2022), yet prevailing evaluation frameworks prioritise operational metrics over person-centred care dimensions such as equity, therapeutic relationships, and relational quality (Mohammadnejad et al. 2023). Technical design excellence does not guarantee person-centred outcomes. Evidence confirms that these concerns are not merely theoretical. Around 10% of UK adults now use AI chatbots for symptom assessment (Ayre et al. 2025), yet a pre-registered scoping review confirms that no comprehensive evaluation framework exists that integrates person-centred care principles with equity considerations for such technologies (Bond and Stacey 2025). This gap between rapid adoption and robust evaluation creates substantial risks: technologies that appear clinically effective may undermine person-centred care through poor workflow integration, exacerbate health inequities through culturally unresponsive design, or impose disproportionate burdens on disadvantaged populations. Efficiency gains alone do not fully capture the value of healthcare innovation; the goal must be to improve the outcomes that matter most to patients and care providers, including, but not limited to, productivity. As McCormack (2025) stresses, evaluations must go beyond system metrics to highlight the experiences and values of staff and patients. Thus, reconsidering what counts as meaningful progress is essential if digital innovation is to truly support the human side of health and care.
This commentary critiques productivity-biased evaluation approaches, advocating balanced person-centred metrics alongside efficiency. Evaluations must reflect all perspectives, as data ultimately shape what gets valued, funded, and implemented. Efficiency claims, such as reduced documentation and increased throughput, dominate digital health discourse, yet evidence linking these gains to better patient and staff experiences remains mixed (Bhuyan et al. 2025). But do efficiency gains reliably translate into person-centred care quality? Telehealth expanded access during COVID-19 where in-person care was impossible, yet raised valid questions about the loss of non-verbal cues in mental health assessments (Mohammadnejad et al. 2023). Ambient voice systems (AVS) promised to reduce cognitive load by automating administrative tasks; however, persistent staffing shortages often consume the "saved time", leaving communication strained (Garcia et al. 2024). Thus, evaluation ends up measuring clicks saved, not care enhanced (McCormack 2025).

The equity implications of digital health also warrant urgent scrutiny and are inextricably linked to how we define efficiency. Equity extends beyond simple access: digital poverty, the inability to engage fully online due to inadequate devices, limited connectivity, data costs, or other digital capability barriers such as sight loss and homelessness, creates systematic exclusion (Digital Poverty Alliance 2022; Helsper 2021). Lower-income populations face higher data costs whilst owning less capable devices, creating "relative digital deprivation" in which gaps widen even as overall connectivity improves. Efficient telehealth that works seamlessly for digitally privileged patients excludes those experiencing digital poverty; usage rates hide non-users (Anastasiadou et al. 2025), and technologies may impose disproportionate burdens on disadvantaged populations through what Herd and Moynihan (2018) term "administrative burden": the learning, psychological and compliance costs of navigating systems designed without these needs in mind. For instance, digital handover systems that assume English fluency create barriers for international nurses despite efficiency gains for native-speaking staff, underscoring the need for culturally tailored training (Bhuyan et al. 2025; World Health Organization 2021). Perhaps most strikingly, recent evidence shows that patients view AI chatbots as more empathetic than clinicians (Chen et al. 2025), a finding that highlights how profoundly these technologies are reshaping care relationships. We urgently need evaluation frameworks capable of capturing whether technologies enhance therapeutic connection or merely process interactions efficiently. Policy documents, procurement criteria and funding calls prioritise quantifiable efficiency indicators (throughput, utilisation rates, cost savings) despite sophisticated nursing research offering patient-reported experience measures and qualitative implementation studies (Department of Health and Social Care 2025; World Health Organization 2021). McNamara's Vietnam-era obsession with body counts over strategic reality illustrates the critical flaw of efficiency-based evaluations: measuring what is easy (quantitative efficiency) while ignoring what matters (qualitative person-centred care dimensions) (Kelleher 2021). For instance, electronic prescribing might reduce processing time, but it can interrupt eye contact, discourage patient questions, and create cognitive overload that masks clinical anxiety. Hence, the numbers show gains while therapeutic quality deteriorates invisibly. Length-of-stay reductions celebrate efficiency but miss rushed, anxious discharges leading to readmission, itself a quantifiable failure that arrives too late to inform the original decision.
Addressing these evaluation challenges requires methodological approaches capable of asking "what works, for whom, in what contexts and how", questions which acknowledge that technologies produce differential impacts across populations (Pawson et al. 2005). The Burden of Treatment Theory (May et al. 2014) emphasises that healthcare interventions impose work on patients; digital technologies may amplify this burden for those already disadvantaged, compounding mechanisms of inequity (Digital Poverty Alliance 2022). Evaluation frameworks must actively examine these mechanisms rather than assuming universal benefit. The US Consumer Assessment of Healthcare Providers and Systems (CAHPS) family of surveys, including Hospital CAHPS (HCAHPS) and Clinician & Group CAHPS (CG-CAHPS), already captures whether patients feel heard and respected and whether care is responsive to their circumstances. However, the challenge is not the absence of a validated framework but ensuring that such measures carry equivalent weight to efficiency metrics in procurement and funding decisions. This recognition sets the stage for examining how policy frameworks are beginning to acknowledge these limitations and what more is needed to translate aspirational statements into meaningful change. Policy frameworks recognise the need for person-centred digital health, but evaluation remains a challenge. Three levels matter: person-centred care environments; outcomes (safety, quality, availability); and evaluation metrics, which must reflect all stakeholders, not just efficiency (World Health Organization 2021). Lord Darzi's 2024 independent investigation of NHS England warned that digitalisation efforts have been compromised by poor integration, yet procurement criteria continue to prioritise technical specifications over person-centred impact.
The National Institute for Health and Care Excellence (2022) Evidence Standards Framework for digital health technologies, while comprehensive on technical performance, provides limited guidance on systematically evaluating person-centred or equity dimensions. Technical specifications dominate procurement while person-centred measures carry minimal weight. Resource limitations in low- to middle-income countries increase the risk of fragmentation, further reducing the "practical efficacy" of standardised evaluations (Borges do Nascimento et al. 2023). Evaluation frameworks must therefore include diverse groups of stakeholders and address equity gaps. When digital health moves faster than our capacity to evaluate its human impact, we risk entrenching inequities under the guise of progress (Duffy et al. 2025). Embedding person-centred values at every level demands sustained commitment and dedicated resources, and evaluating these technologies must involve frameworks that balance operational metrics with person-centred care dimensions. Digital health success demands balanced evaluation: efficiency alongside relational nursing priorities. Will evaluation frameworks empower person-centred nursing innovation, or entrench the narrow reign of efficiency? Nursing-led realignment requires co-design, workload validation and equity assessments in procurement. Rethinking success means asking what gets measured, funded and prioritised, and whether equity is considered. The authors have nothing to report. The authors declare no conflicts of interest.

Topics

Digital Mental Health Interventions · Artificial Intelligence in Healthcare and Education · Electronic Health Records Systems