OpenAlex · Updated hourly · Last updated: 26.03.2026, 23:41

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Human–AI interaction in a cancer-enriched double-reading breast screening cohort: diagnostic accuracy and second-reader behavior

2026 · 0 citations · Cancer Imaging · Open Access

0 citations · 10 authors · Published 2026

Abstract

To evaluate the impact of deploying AI as the first reader (R1) in a double-reading breast-screening workflow and to characterize second-reader (R2) behavior—including the effect of disclosing whether R1 was AI or human. This retrospective study used a cancer-enriched cohort of 220 women (95 cancers), with prevalence-weighted analyses performed to approximate population screening metrics. Five radiologists and one commercially available AI (Breast-SlimView®, Hera-MI) each served as R1; four radiologists served as R2. For each R2, cases were randomized 1:1 to AI-first versus human-first and, independently, to disclosure versus concealment of R1 identity. R2 could validate, dismiss, or add annotations. The primary endpoint was final decision correctness by breast. We used GEE logistic regression to estimate the overall effect of using AI as the first reader and to isolate second-reader behavior independently of first-reader accuracy. At the prespecified R1 operating point, AI had sensitivity/specificity/accuracy of 85.2%/79.5%/80.8% versus 84.3%/84.5%/85.0% for human R1s; crude final accuracy was lower for AI-first. At 0.6% prevalence, AI-first yielded a higher recall rate (20.8% vs. 16.8%) with slightly lower PPV (2.7% vs. 3.0%). Conditioning on R1 correctness, R2s were approximately twice as likely to overturn an incorrect AI-initiated opinion as an incorrect human-initiated one (OR ≈ 2.05, p < 0.001). Disclosure that R1 was AI increased R2 corrections (from 13.6% to 19.1%, p = 0.029). Thirteen AI-true-positive cues were dismissed by R2. At this operating point, AI-first reduced crude accuracy due to lower specificity, yet reader-behavior analyses indicate greater scrutiny of AI-initiated opinions. Protocol, threshold, and user-interface choices may raise specificity while preserving beneficial human–AI dynamics.
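The prevalence-weighting step can be illustrated with a short sketch: applying the reported AI R1 operating point (85.2% sensitivity, 79.5% specificity) to a 0.6% screening prevalence. Note this uses R1 metrics alone, whereas the published recall and PPV figures (20.8%, 2.7%) reflect final double-reading decisions, so the numbers here are close but not identical.

```python
# Illustrative prevalence re-weighting from R1 sensitivity/specificity alone
# (an approximation; published values reflect final double-reading decisions).
prevalence = 0.006                  # 0.6% screening prevalence
sens_ai, spec_ai = 0.852, 0.795    # reported AI R1 operating point

tp_rate = prevalence * sens_ai               # true positives per screen
fp_rate = (1 - prevalence) * (1 - spec_ai)   # false positives per screen
recall_rate = tp_rate + fp_rate              # fraction of women recalled
ppv = tp_rate / recall_rate                  # cancers per recall

print(f"recall ≈ {recall_rate:.1%}, PPV ≈ {ppv:.1%}")
# → recall ≈ 20.9%, PPV ≈ 2.4%
```

The gap between this back-of-the-envelope 20.9%/2.4% and the published 20.8%/2.7% reflects the second reader's corrections, which the simple formula above ignores.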
In a simulated double-reading breast-screening workflow with either a human or AI as first reader, AI-first final accuracy was lower than human-first (80.8% vs. 85.0%). At the prespecified R1 threshold (matched to the average human R1 sensitivity), AI had lower specificity, yielding more false positives. Second readers overturned AI-initiated errors more often than human-initiated ones (controlled direct effect, odds ratio ≈ 2.1). Disclosing that R1 was AI increased R2 corrections (from 13.6% to 19.1%; p = 0.029).
