This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence chain-of-thought reasoning in nuanced medical scenarios: mitigation of cognitive biases through model intransigence
Citations: 0 · Authors: 2 · Year: 2025
Abstract
Some biases persist in chain-of-thought reasoning LLMs, and models tend to produce intransigent recommendations. These findings highlight the need for clinicians to think broadly, respect diversity, and remain vigilant when interpreting chain-of-thought reasoning artificial intelligence LLMs in nuanced medical decisions for patients.
Related works
The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,598 citations
Making sense of Cronbach's alpha
2011 · 13,836 citations
QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,641 citations
A method for estimating the probability of adverse drug reactions
1981 · 11,484 citations
Evidence-Based Medicine
1992 · 4,153 citations