This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Keeping AI in medicine and radiology within the framework of scientific method: measuring to close the epistemic gap
Citations: 1
Authors: 2
Year: 2025
Abstract
The classical scientific method (systematic observation, question formulation, hypothesis construction, experimental testing, analysis including rejection or refinement, conclusion, and transparent communication) structures reliable inference in medicine, reflecting the long development of modern science. Contemporary artificial intelligence (AI), and in particular foundation/large language models, seems to invert this logical one-way order by proposing "end-to-end" systems and supplying post hoc rationales. We argue that this "black-box acceleration" risks erasing intermediate epistemic steps unless development and evaluation are explicitly re-embedded in the workflow. "Explainable" AI does not guarantee recovery of reasoning because of an epistemic gap: explanations are constrained to human clinical concepts even when a model's decision basis may be non-human yet causally valid, non-human and spurious, or human-readable but only post hoc. Of note, medical imaging AI studies show saliency instability, shortcut learning, and distributional fragility, especially when a system is translated to new or unknown contexts. Training and validating systems on the deployment population within a defined technical context reduces domain mismatch and improves calibration, but it can entrench local biases and impair portability to other populations or different technology systems. We herein propose a model of an institutional AI program (hospital-embedded pipelines with auditable provenance, preregistered evaluations, task-grounded tests of causal alignment and invariance, publication of failures, and local-plus-external validation) to preserve the stepwise logic of science while accepting practical opacity. Finally, detecting failures of AI systems is not only an ethical duty but also a way to improve system performance, and it should be considered in negotiations between AI developers and medical centers.

The canonical sequence of the scientific method described above remains the core scaffold of biomedical investigation. Its historical lineage is well established: Galileo Galilei integrated measurement with controlled experiment and mathematical analysis [1]; Francis Bacon formalized inductive empiricism [2]; René Descartes emphasized methodological doubt and structured reasoning [3]; Isaac Newton operationalized a hypothetico-deductive program with testable predictions grounded in mathematical laws [4]. In the nineteenth and twentieth centuries, medical science assimilated statistical design and causal inference through Pierre-Charles A. Louis's "numerical method" [5], Ronald A. Fisher's experimental design [6], Austin Bradford Hill's randomized trials and causality criteria [7], and Archibald L. Cochrane's insistence on effectiveness and efficiency in health services research [8]. Karl R. Popper's falsificationism reframed scientific progress as the survival of hypotheses under attempted refutation [9]. The theory-ladenness of observation, as emphasized by Norwood R. Hanson [10] and later by Thomas S. Kuhn [11], shows that data are interpreted through conceptual lenses. This insight does not entail epistemic relativism, but it does demand explicit models and reproducible tests that can be scrutinized and challenged across competing paradigms.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations