OpenAlex · Updated hourly · Last updated: 26.03.2026, 05:20

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Artificial Intelligence in the ICU: from predictive to actionable AI

2026 · 0 citations · EUR Research Repository (Erasmus University Rotterdam)
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

With artificial intelligence (AI) gaining traction across various fields, including the data-rich intensive care unit (ICU), research has largely focused on developing predictive models (from linear regression to deep learning) to forecast outcomes such as mortality or sepsis. However, another key aspect of AI is predicting how different actions affect patient outcomes, a task known as causal inference, which is essential to realize AI-assisted decision-making in the ICU. To emphasize its significance, we propose to refer to any data-driven model used for causal inference tasks as ‘actionable AI’, as opposed to ‘predictive AI’.

Predictive AI models rely solely on associations and cannot determine how a patient’s outcome might change with a specific intervention. For AI to assist ICU physicians in treatment decisions, i.e., ‘actionable AI’, it must account for cause and effect. Actionable AI aims to predict the difference between the risks that would result from certain treatment decisions. Using these predictions, it could recommend the treatment most likely to lead to the best outcome. In medicine, causal inference tasks are traditionally achieved through randomized controlled trials (RCTs), where treatment randomization allows outcome differences to be interpreted as causal effects. Hence, one can simply compare outcomes across treatment arms and conclude that the treatment with the best observed outcome is optimal. A causal inference task using observational data can be understood as an attempt to replicate the RCT that would ideally address the research question (i.e., the ‘target trial’), known as target trial emulation. However, with observational data, causal inference tasks are more complex, often compounded by bias stemming from common causes (confounding bias) and selection on common effects (selection bias). For AI to learn causal inference from observational data, it must adjust for these biases.
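The gap between association and causal effect described above can be sketched on synthetic data. In the following sketch (all variable names, probabilities, and effect sizes are invented for illustration), a treatment is given more often to sicker patients, so the raw ‘predictive’ association suggests harm, while adjusting for the confounder by standardization recovers the true benefit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical synthetic cohort: one binary confounder ("severe"),
# a binary treatment given preferentially to severe patients,
# and a binary death outcome.
severe = rng.binomial(1, 0.4, n)
treat = rng.binomial(1, np.where(severe == 1, 0.8, 0.2))
# True causal effect: treatment lowers mortality risk by 0.10
p_death = np.where(severe == 1, 0.6, 0.3) - 0.10 * treat
death = rng.binomial(1, p_death)

# "Predictive" view: the raw association. Treated patients are
# sicker, so treatment looks harmful.
naive = death[treat == 1].mean() - death[treat == 0].mean()

# "Actionable" view: adjust for the confounder by standardization
# (the g-formula with a single discrete confounder): take the risk
# difference within each stratum and weight by the stratum's size.
adjusted = 0.0
for s in (0, 1):
    w = (severe == s).mean()
    r1 = death[(treat == 1) & (severe == s)].mean()
    r0 = death[(treat == 0) & (severe == s)].mean()
    adjusted += w * (r1 - r0)

print(f"naive risk difference:    {naive:+.3f}")    # misleadingly positive
print(f"adjusted risk difference: {adjusted:+.3f}") # near the true -0.10
```

Standardization is only one of several adjustment strategies; the point of the sketch is that the naive contrast answers a different question than the causal one.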
ICU treatments involve a sequence of decisions, meaning patients are treated according to a specific regime (or policy) that guides treatment choices based on their response. When multiple decisions occur over time, common causes (i.e., confounders) may change and even be influenced by prior treatments, leading to ‘time-varying confounding’. Adjusting for this requires more advanced methods than when only one point in time is considered.
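One such more advanced method is inverse-probability weighting over the sequence of decisions. The sketch below (a hypothetical two-decision scenario with invented variables; the true effect of treatment is zero) shows a covariate that is influenced by the first treatment and in turn drives the second, so a crude comparison of regimes is biased, while weighting each patient by the inverse probability of the treatments actually received removes the bias:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical two-decision scenario (all names and values invented):
# U  - unmeasured severity, drives both the covariate and the outcome
# A0 - first treatment decision (randomized here for simplicity)
# L1 - covariate measured after A0: a time-varying confounder,
#      influenced by A0 and by U
# A1 - second treatment decision, based on L1
# Y  - outcome; the TRUE effect of both treatments is zero
U = rng.binomial(1, 0.5, n)
A0 = rng.binomial(1, 0.5, n)
L1 = rng.binomial(1, 0.2 + 0.3 * A0 + 0.4 * U)
A1 = rng.binomial(1, 0.2 + 0.6 * L1)
Y = rng.binomial(1, 0.2 + 0.5 * U)

# Crude comparison of the regimes "always treat" vs "never treat":
# biased, because L1 (and through it U) differs between the groups.
always = (A0 == 1) & (A1 == 1)
never = (A0 == 0) & (A1 == 0)
crude = Y[always].mean() - Y[never].mean()

# Inverse-probability weighting: estimate each patient's probability
# of the treatment actually received at every decision point and
# weight by its inverse. In the weighted pseudo-population, A1 no
# longer depends on L1, so the regimes become comparable.
p_a0 = np.where(A0 == 1, A0.mean(), 1 - A0.mean())
p1_given_l1 = np.array([A1[L1 == 0].mean(), A1[L1 == 1].mean()])[L1]
p_a1 = np.where(A1 == 1, p1_given_l1, 1 - p1_given_l1)
w = 1.0 / (p_a0 * p_a1)

risk_always = np.average(Y[always], weights=w[always])
risk_never = np.average(Y[never], weights=w[never])
ipw = risk_always - risk_never

print(f"crude difference: {crude:+.3f}")  # spuriously non-zero
print(f"IPW difference:   {ipw:+.3f}")    # close to the true zero
```

Note that simply regressing on L1 would not help here either: conditioning on a covariate that is itself affected by earlier treatment introduces its own bias, which is why per-decision weighting (or related g-methods) is needed.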



Topics

Artificial Intelligence in Healthcare and Education · Sepsis Diagnosis and Treatment · Machine Learning in Healthcare