This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Bias Discovery in Machine Learning Models for Mental Health
24 citations · 5 authors · 2022
Abstract
Fairness and bias are crucial concepts in artificial intelligence, yet they are relatively ignored in machine learning applications in clinical psychiatry. We computed fairness metrics and present bias mitigation strategies using a model trained on clinical mental health data. We collected structured data related to the admission, diagnosis, and treatment of patients in the psychiatry department of the University Medical Center Utrecht. We trained a machine learning model to predict future administrations of benzodiazepines on the basis of past data. We found that gender plays an unexpected role in the predictions—this constitutes bias. Using the AI Fairness 360 package, we implemented reweighing and discrimination-aware regularization as bias mitigation strategies, and we explored their implications for model performance. This is the first application of bias exploration and mitigation in a machine learning model trained on real clinical psychiatry data.
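The abstract mentions reweighing (via the AI Fairness 360 package) as one of the bias mitigation strategies. The sketch below illustrates the underlying Kamiran–Calders reweighing scheme in plain Python, without depending on AIF360: each (group, label) cell receives the weight P(group)·P(label) / P(group, label), so that the weighted data looks as if the protected attribute and the label were independent. The toy gender/benzodiazepine arrays are hypothetical and not taken from the paper's data.

```python
from collections import Counter

def reweigh(groups, labels):
    """Kamiran-Calders reweighing: weight each sample by
    expected (group, label) frequency under independence
    divided by the observed joint frequency."""
    n = len(labels)
    g_count = Counter(groups)                 # marginal counts of the protected attribute
    y_count = Counter(labels)                 # marginal counts of the label
    gy_count = Counter(zip(groups, labels))   # joint counts
    weights = []
    for g, y in zip(groups, labels):
        expected = (g_count[g] / n) * (y_count[y] / n)
        observed = gy_count[(g, y)] / n
        weights.append(expected / observed)
    return weights

# Hypothetical toy data: gender (0/1) and a binary
# "future benzodiazepine administration" label (0/1).
groups = [0, 0, 0, 1, 1, 1, 1, 1]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
w = reweigh(groups, labels)
```

Over-represented cells (here, group 0 with label 1) get weights below 1 and under-represented cells get weights above 1; the resulting sample weights can then be passed to any classifier that accepts them. The paper's actual experiments use AIF360's `Reweighing` preprocessor on real clinical data rather than this illustrative re-implementation.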
Related Works
The global landscape of AI ethics guidelines
2019 · 4,781 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,893 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,539 citations
Fairness through awareness
2012 · 3,309 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,254 citations