This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Risk prediction tools in multiple long-term conditions management: a qualitative study
Citations: 0 · Authors: 6 · Year: 2025
Abstract
BACKGROUND: Risk stratification is integral to providing good care, but its value in managing patients with multiple long-term conditions (MLTC) remains uncertain.

AIM: To explore the perspectives of healthcare professionals, patients, and carers on the benefits and challenges of using risk prediction in MLTC management.

DESIGN AND SETTING: Analysis of interviews with 30 professionals and six focus groups with 28 patients with MLTC and carers in four Scottish integrated health and social care partnerships.

METHOD: Data were collected between May 2023 and May 2024 and analysed thematically.

RESULTS: Three themes were identified: legitimation of risk prediction tools, workload implications of risk prediction, and reconfiguration of risk prediction tools. Healthcare professionals questioned the clinical utility of existing tools, noting a lack of clinical nuance and an overreliance on blunt algorithms. They stressed the workload implications of new tools and the need for seamless integration, clearer guidance on how to respond to predictions, and inclusion of psychosocial factors and meaningful outcomes. Artificial intelligence (AI) and routine data were seen as promising for enhancing predictive accuracy and real-time application of new tools, but effective use requires better IT systems, training, and clinical oversight ("human-in-the-loop"). Patients and carers expressed mixed views, warning that risk communication could cause unnecessary anxiety, undermine autonomy, and be less relevant in older age. They emphasised the importance of risk communication that reflects social context and aligns with patient priorities.

CONCLUSION: There was some support for using AI-informed tools for risk stratification, provided they have no workload implications, complement clinical judgement, and account for patients' clinical complexity and preferences.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,693 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,598 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,124 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,871 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations