This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Bias Mitigation in Primary Healthcare Artificial Intelligence Models: A Scoping Review
Citations: 0
Authors: 8
Year: 2024
Abstract
<h3>Background:</h3> Artificial intelligence (AI) predictive models in primary healthcare can potentially benefit population health. Algorithms can identify more rapidly and accurately who should receive care and health services, but they could also perpetuate or exacerbate existing biases toward diverse groups. We noticed a gap in current knowledge about which strategies are deployed to assess and mitigate bias toward diverse groups, based on their personal or protected attributes, in primary healthcare algorithms. <h3>Objectives:</h3> To identify and describe attempts, strategies, and methods to mitigate bias in primary healthcare artificial intelligence models; which diverse groups or protected attributes have been considered; and what the results are on bias attenuation and AI model performance. <h3>Methods:</h3> We conducted a scoping review informed by the Joanna Briggs Institute (JBI) review recommendations, with a search strategy developed by an experienced librarian. <h3>Results:</h3> After removing 585 duplicates, we screened 1018 titles and abstracts. Of the remaining 189, we excluded 172 full texts and included 17 studies. The most frequently investigated personal or protected attributes were race (or ethnicity), in 12/17 included studies, and sex, coded as binary "male vs female", in 10/17. We grouped studies according to whether bias mitigation was attempted in 1) existing AI models or datasets, 2) data sources such as Electronic Health Records, 3) tools developed with a "human-in-the-loop", or 4) ethical principles identified for informed decision-making. Mathematical and algorithmic preprocessing methods, such as changing data labeling and reweighing, and a natural language processing method extracting data from unstructured notes, showed the greatest potential. Other processing methods, such as group recalibration and equalized odds, exacerbated prediction errors between groups or resulted in overall model miscalibration.
<h3>Conclusions:</h3> Results suggest that biases toward diverse groups are more easily mitigated when data are open-sourced, multiple stakeholders are involved, and mitigation is applied at the algorithm's preprocessing stage. Further empirical studies considering more diverse groups, such as nonbinary gender identities or Indigenous peoples in Canada, are needed to confirm and expand this knowledge.
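The reweighing preprocessing method highlighted in the Results can be sketched as follows. This is a minimal illustration of the Kamiran-Calders reweighing scheme (weighting each record by P(group) x P(label) / P(group, label) so that the protected attribute and the outcome become statistically independent in the weighted data); it is not code from any of the included studies, and all names and data here are hypothetical.

```python
from collections import Counter

def reweigh(groups, labels):
    """Assign each record the weight P(group) * P(label) / P(group, label).

    In the reweighted data, group membership and outcome label are
    statistically independent, which is the goal of the reweighing
    preprocessing step for bias mitigation.
    """
    n = len(labels)
    count_group = Counter(groups)          # marginal counts per group
    count_label = Counter(labels)          # marginal counts per label
    count_joint = Counter(zip(groups, labels))  # joint counts
    return [
        (count_group[g] / n) * (count_label[y] / n) / (count_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: group "a" is over-represented among positive labels.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweigh(groups, labels)
# Over-represented (group, label) pairs receive weights below 1,
# under-represented pairs receive weights above 1.
```

The resulting weights would typically be passed to a model's `sample_weight` parameter at training time, so the fitted model no longer learns the spurious association between the protected attribute and the outcome.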
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations