This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Cascade Amplification in Administrative AI Pipelines: When Feedback Loops Turn Additive Errors into System Failures
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Administrative artificial intelligence tools in primary care — scheduling, documentation, coding, billing, referral management — are routinely classified as lower-risk than clinical decision-support systems, on the basis of evaluating each tool in isolation. We simulate a five-node administrative AI pipeline on a Synthea primary care cohort (n = 1,000 patients, 100 Monte Carlo iterations, seed 42), calibrated with parameters from published empirical evidence including a conservative +2% wRVU inflation from independent longitudinal data, an 8 percentage point E/M upshift from national claims analysis, and a 28% triage error rate from external validation. We introduce the Cascade Amplification Factor (CAF), a metric that distinguishes additive (CAF = 1.0) from superlinear (CAF > 1.0) error accumulation. Two adjacent AI nodes (documentation + coding) yield CAF = 0.503 [0.458, 0.540], indicating partial error cancellation. Five AI nodes without feedback yield CAF = 1.009 [0.925, 1.099], indicating near-additive accumulation. Five AI nodes with a referral-to-scheduling feedback loop yield CAF = 2.245 [2.032, 2.486], exceeding the pre-specified clinical relevance threshold (CAF = 1.2) by a factor of 1.87. Care-related harm dominates the four-dimension harm taxonomy under full pipeline deployment (share ≈ 61%, driven by intervention deficits). We frame the transition through Perrow's Normal Accidents Theory: the feedback loop converts a linearly interactive system into a complexly interactive one. The regulatory implication is that governing individual tools is necessary but insufficient; the feedback architecture of the deployed pipeline is the decisive governance target. Govern the loop, not the tool.
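The abstract characterizes the Cascade Amplification Factor only behaviorally (CAF = 1.0 is additive, CAF > 1.0 superlinear). A minimal Monte Carlo sketch of how such a factor could be estimated, assuming CAF is the ratio of the observed mean error count per record to the additive expectation (the sum of per-node error rates); the node count, error rates, and one-hop referral-to-first-node feedback rule below are illustrative stand-ins, not the paper's calibrated parameters or exact estimator:

```python
import random

def caf(mean_errors, node_error_rates):
    """Cascade Amplification Factor (assumed form): observed error
    accumulation divided by the additive expectation, i.e. the sum of
    per-node error rates. 1.0 = additive; > 1.0 = superlinear."""
    return mean_errors / sum(node_error_rates)

def simulate(node_error_rates, feedback=False, n=100_000, seed=42):
    """Mean number of errors a record accumulates in one pass through
    the chain. With feedback on, an error at the last node (think:
    referral) sends the record back through the first node (think:
    scheduling), exposing it to that node's error rate a second time."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        hits = [rng.random() < p for p in node_error_rates]
        errs = sum(hits)
        if feedback and hits[-1]:
            # feedback loop: re-exposure to the first node's error rate
            errs += rng.random() < node_error_rates[0]
        total += errs
    return total / n

rates = [0.02] * 5  # five nodes, illustrative error rates
print(caf(simulate(rates), rates))                  # near 1.0 (additive)
print(caf(simulate(rates, feedback=True), rates))   # above the no-feedback value
```

Counting errors per record (rather than whether any error occurred) is what lets the feedback pass register as amplification: a looped-back record can accumulate a second error on top of the one that triggered the loop.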
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations