Achieving Fairness without Demographic Information

Alessandro Pasi  •  Matteo Belletti  •  Razvan Ciprian Stricescu
Abstract

In much of the existing machine learning (ML) fairness research, protected features like race and sex are typically included in datasets and used to address fairness concerns. However, due to privacy concerns and regulatory restrictions, collecting or using these features for training or inference is often not feasible. This raises the question: how can we train an ML model to be fair without knowing the protected group memberships? This work tackles this issue by proposing Adversarially Reweighted Learning (ARL). The key idea is that non-protected features and task labels can help identify fairness problems. ARL uses these features to co-train an adversarial reweighting approach that enhances fairness. The results indicate that ARL improves Rawlsian Max-Min fairness and achieves significant AUC improvements for the worst-case protected groups across various datasets, outperforming current leading methods.
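To make the co-training idea concrete, below is a minimal sketch of an adversarial reweighting loop for binary classification: a learner minimizes an example-weighted loss while an adversary, which sees only non-protected features and the task label, assigns the weights so as to maximize that loss. The network sizes, the sigmoid-based weight normalization, and the alternating update scheme are illustrative assumptions, not the exact configuration used in this work.

```python
# Illustrative sketch of adversarial reweighting (assumptions noted above).
import torch
import torch.nn as nn

class Learner(nn.Module):
    """Predicts the task label from non-protected features."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # logits

class Adversary(nn.Module):
    """Assigns example weights from non-protected features and the task label."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, y):
        s = torch.sigmoid(self.net(torch.cat([x, y.unsqueeze(-1)], dim=-1))).squeeze(-1)
        # Normalize so the average weight is about 1 and every example
        # keeps a non-zero contribution to the learner's loss.
        return 1.0 + x.shape[0] * s / (s.sum() + 1e-8)

def arl_step(learner, adversary, opt_l, opt_a, x, y):
    """One alternating update: the learner minimizes, and the adversary
    maximizes, the example-weighted cross-entropy."""
    per_example = nn.functional.binary_cross_entropy_with_logits(
        learner(x), y, reduction="none")

    # Learner step: minimize the weighted loss (weights held fixed).
    weights = adversary(x, y).detach()
    opt_l.zero_grad()
    (weights * per_example).mean().backward()
    opt_l.step()

    # Adversary step: maximize the weighted loss (learner held fixed).
    weights = adversary(x, y)
    opt_a.zero_grad()
    (-(weights * per_example.detach()).mean()).backward()
    opt_a.step()

# Usage on synthetic data (illustrative only):
# x = torch.randn(256, 10); y = (x[:, 0] > 0).float()
# learner, adversary = Learner(10), Adversary(10)
# opt_l = torch.optim.Adam(learner.parameters(), lr=1e-3)
# opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
# for _ in range(200): arl_step(learner, adversary, opt_l, opt_a, x, y)
```

Because the adversary can only upweight regions of the input space where the learner performs poorly, it tends to concentrate weight on computationally identifiable groups that suffer high loss, which is what drives the worst-case (Rawlsian Max-Min) improvements reported in the abstract.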

Outcomes