This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Enabling scalable AI for Digital Health: interoperability, consent and ethics support
Citations: 6
Authors: 1
Year: 2021
Abstract
This paper proposes an approach for building scalable AI applications in digital health, with a specific focus on addressing interoperability, consent, and ethics challenges. These challenges need to be considered in the context of increasingly available tooling for streamlined model development, training, validation, and deployment, while accommodating novel solutions for explainable AI support for clinicians. Such an approach is required because digital health ecosystems involve many data types created by different systems, often used as part of workflows that cross jurisdictional boundaries. Interoperability solutions are needed to support technical and business agreements between parties providing data and services, including knowledge-intensive services such as ML and AI. Computable expressions of consent and ethics policies are needed to control how patient information is used, including compliance with regulatory rules, possibly from different policy contexts. Our approach, based on the latest interoperability and enterprise policy standards, may provide useful guidance for practitioners building scalable AI solutions for digital health.
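The abstract's notion of a "computable expression" of consent policies can be illustrated with a minimal sketch. This is not the paper's method or any standard's API; the types (`ConsentPolicy`, `Request`) and the `permits` check are invented here purely to show the idea of evaluating a data-use request against machine-readable consent constraints such as purpose and jurisdiction.

```python
from dataclasses import dataclass

# Hypothetical illustration: a toy computable consent policy.
# All names here are invented for this sketch, not taken from
# the paper or from any interoperability standard.

@dataclass(frozen=True)
class ConsentPolicy:
    patient_id: str
    allowed_purposes: frozenset       # e.g. {"treatment", "research"}
    allowed_jurisdictions: frozenset  # e.g. {"EU"}

@dataclass(frozen=True)
class Request:
    patient_id: str
    purpose: str
    jurisdiction: str

def permits(policy: ConsentPolicy, request: Request) -> bool:
    """Allow a request only if it targets the consenting patient and
    stays within the consented purposes and jurisdictions."""
    return (
        policy.patient_id == request.patient_id
        and request.purpose in policy.allowed_purposes
        and request.jurisdiction in policy.allowed_jurisdictions
    )

policy = ConsentPolicy("p1", frozenset({"treatment"}), frozenset({"EU"}))
print(permits(policy, Request("p1", "treatment", "EU")))  # True
print(permits(policy, Request("p1", "research", "US")))   # False
```

A real system would draw such rules from standardized policy languages rather than hard-coded dataclasses, so that policies from different jurisdictions can be exchanged and enforced consistently.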
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,534 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,423 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,917 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,582 citations