OpenAlex · Updated hourly · Last updated: 31.03.2026, 18:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Learning fair representations via an adversarial framework

2023 · 41 citations · Open Access
Open full text at the publisher

Citations: 41

Authors: 7

Year: 2023

Abstract

Fairness has become a central issue for our research community as classification algorithms are adopted in societally critical domains such as recidivism prediction and loan approval. In this work, we consider the potential bias based on protected attributes (e.g., race and gender), and tackle this problem by learning latent representations of individuals that are statistically indistinguishable between protected groups while sufficiently preserving other information for classification. To do that, we develop a minimax adversarial framework with a generator to capture the data distribution and generate latent representations, and a critic to ensure that the distributions across different protected groups are similar. Our framework provides theoretical guarantees with respect to statistical parity and individual fairness. Empirical results on four real-world datasets also show that the learned representation can effectively be used for classification tasks such as credit risk prediction while obstructing information related to protected groups, especially when removing protected attributes is not sufficient for fair classification.
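The abstract's statistical parity criterion can be illustrated with a small metric computation. This is a minimal sketch, not code from the paper: it measures the absolute gap in positive-prediction rates between two protected groups (a value of 0 corresponds to perfect statistical parity); the function name and the binary encoding of the protected attribute are assumptions for illustration.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """Absolute gap in positive-prediction rates between the two
    protected groups (0 = perfect statistical parity).

    y_pred    : binary predictions (0/1) of a classifier
    protected : binary protected-attribute labels (0/1) per individual
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_group0 = y_pred[protected == 0].mean()  # positive rate in group 0
    rate_group1 = y_pred[protected == 1].mean()  # positive rate in group 1
    return abs(rate_group0 - rate_group1)

# Toy example: predictions that favour group 1.
preds  = np.array([1, 0, 0, 1, 1, 1, 1, 1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(preds, groups))  # 0.5
```

In the paper's setting, the classifier would be trained on the adversarially learned representations rather than the raw features, and this gap should shrink toward 0 while task accuracy is preserved.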

Similar works

Authors

Institutions

Topics

Ethics and Social Impacts of AI; Artificial Intelligence in Healthcare and Education