This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Fairness in Language Models: A Tutorial
Citations: 1
Authors: 4
Year: 2025
Abstract
Language Models (LMs) achieve outstanding performance across diverse applications but often produce biased outcomes, raising concerns about their trustworthy deployment. These concerns call for fairness research specific to LMs; however, most existing work in machine learning assumes access to model internals or training data, conditions that rarely hold in practice. As LMs continue to exert growing societal influence, it becomes increasingly important to understand and address fairness challenges unique to these models. To this end, our tutorial begins by showcasing real-world examples of bias to highlight their practical implications and uncover underlying sources. We then define fairness concepts tailored to LMs, review methods for bias evaluation and mitigation, and present a multi-dimensional taxonomy of benchmark datasets for fairness assessment. We conclude by outlining open research challenges, aiming to provide the community with both conceptual clarity and practical tools for fostering fairness in LMs. All tutorial resources are publicly accessible at https://github.com/vanbanTruong/fairness-in-large-language-models.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,711 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,884 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,502 citations
Fairness through awareness
2012 · 3,301 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,192 citations