This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AI is Smart. Is it Wise? Quantifying the Effect of Patient Choice (β) on Physical Outcomes
Citations: 0
Authors: 4
Year: 2026
Abstract
Large language models (LLMs) increasingly guide clinical decisions through population-level evidence, yet they cannot encode individual patient preferences. When treatments yield comparable outcomes, patient choice may drive decisions, though its effect remains unquantified. The Spine Patient Outcomes Research Trial (SPORT), marked by similar surgical and nonoperative results and substantial crossover, provided a quasi-experimental structure for estimating unbiased treatment effects and quantifying the contribution of patient choice to outcomes. Using only published aggregate results from SPORT, we conducted a two-stage least squares instrumental-variable analysis with randomized assignment as the instrument, estimating Complier Average Causal Effects and assessing sensitivity with E-values. Primary outcomes were SF-36 Bodily Pain, SF-36 Physical Function, and the Oswestry Disability Index. We decomposed treatment effects into alpha (the biological treatment mechanism) and beta (the patient-choice contribution). Aggregate estimates showed alpha = 15.7 (0.5) and beta = 7.4 (3.4), with the difference in alpha between surgical and nonoperative care approximately 0.65. This analysis quantifies a measurable and significant effect of patient choice (beta) on physical outcomes. When treatment effects are comparable (i.e., the difference in alpha is small), beta, a dimension inaccessible to current LLMs trained on alpha-biased population-level evidence, becomes the dominant driver of decision-making. These findings provide empirical grounding for informed choice, clarify the limits of LLMs trained on alpha-biased evidence, and quantify a structural constraint in AI-driven clinical decision support.
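The instrumental-variable approach described in the abstract can be illustrated with a minimal sketch. With a single binary instrument (randomized assignment) and a single binary treatment, the two-stage least squares estimate reduces to the Wald estimator: the intention-to-treat effect divided by the difference in treatment uptake between arms. All numbers below are hypothetical and are not the SPORT estimates reported in the paper.

```python
# Hypothetical sketch of the Wald/IV estimator for a Complier Average
# Causal Effect (CACE) from aggregate trial results. With one binary
# instrument and one binary treatment, this equals the two-stage least
# squares estimate. Inputs are illustrative, not SPORT data.

def wald_cace(itt_effect, p_treated_assigned_surgery, p_treated_assigned_nonop):
    """CACE = intention-to-treat effect / difference in treatment uptake.

    itt_effect: mean outcome difference between randomized arms
    p_treated_assigned_*: fraction actually receiving surgery in each arm
    """
    compliance_gap = p_treated_assigned_surgery - p_treated_assigned_nonop
    return itt_effect / compliance_gap

# Example: a 5-point ITT difference on SF-36 Bodily Pain, with 60%
# surgery uptake in the surgical arm and 40% crossover to surgery in
# the nonoperative arm (a compliance gap of 0.2).
cace = wald_cace(5.0, 0.60, 0.40)
print(round(cace, 1))  # 25.0
```

Substantial crossover, as in SPORT, shrinks the compliance gap in the denominator, which is why naive as-treated comparisons diverge from the complier-specific causal effect.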
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations