This is an overview page with metadata for this scholarly work. An external link to the full text is currently not available.
Governing Adaptive Clinical Artificial Intelligence: Structural Failure Modes, Auditability, and Infrastructure for Decision Safety
Citations: 0 · Authors: 1 · Year: 2026
Abstract
This collection reflects a structured research program examining clinical artificial intelligence as an adaptive sociotechnical infrastructure requiring explicit governance constraints. The included works develop a layered analytical framework addressing structural failure modes, deployment-level safety considerations, reimbursement-driven behavioral incentives, and externally constrained learning architectures. Across these contributions, the Externally Governed Learning Systems (EGLS) framework is introduced as a formal model for separating adaptive computation from institutional decision authority and viability enforcement. The works collectively explore how governance mechanisms can be embedded at the infrastructure level to support auditability, reproducibility, and deployment safety in clinical AI systems. The intended audience includes clinicians, health informaticians, machine learning researchers, regulators, policymakers, and institutional leaders engaged in the deployment and oversight of adaptive AI systems in healthcare.
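The abstract's core architectural idea, separating adaptive computation from institutional decision authority, can be sketched as a minimal gating pattern. Everything here is illustrative: the class names (`ExternalGovernor`, `AdaptiveComponent`), the parameter envelope, and the audit log are assumptions, not constructs taken from the EGLS papers themselves.

```python
from dataclasses import dataclass, field


@dataclass
class ProposedUpdate:
    """A parameter change proposed by the adaptive component."""
    parameter: str
    new_value: float


@dataclass
class ExternalGovernor:
    """Stands in for institutional decision authority: reviews proposed
    updates against a viability envelope, independently of the learner."""
    bounds: dict  # parameter name -> (lo, hi) viability envelope
    audit_log: list = field(default_factory=list)

    def review(self, update: ProposedUpdate) -> bool:
        lo, hi = self.bounds[update.parameter]
        approved = lo <= update.new_value <= hi
        # Every decision is recorded, supporting auditability.
        self.audit_log.append((update.parameter, update.new_value, approved))
        return approved


class AdaptiveComponent:
    """Adaptive computation: may propose updates, but cannot apply
    them without external approval."""

    def __init__(self, governor: ExternalGovernor):
        self.params = {"threshold": 0.5}
        self.governor = governor

    def propose(self, parameter: str, new_value: float) -> bool:
        update = ProposedUpdate(parameter, new_value)
        if self.governor.review(update):
            self.params[parameter] = new_value
            return True
        return False


gov = ExternalGovernor(bounds={"threshold": (0.2, 0.8)})
model = AdaptiveComponent(gov)
model.propose("threshold", 0.6)   # within the envelope: applied
model.propose("threshold", 0.95)  # outside the envelope: rejected
```

The design point is that the governor, not the learner, holds the viability bounds and the audit log, so adaptive behavior remains externally constrained and every accepted or rejected update is reproducible from the log.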
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,644 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,850 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations