Mitigating Intersectional Fairness: a Practical Approach with FaUCI


Despite many ongoing efforts, the current landscape of fairness-aware AI remains insufficiently equipped to navigate the complexity of real-world scenarios. In particular, in the context of machine learning (ML), the vast majority of existing approaches to fairness focus on a single sensitive attribute, which is inadequate for real-world scenarios that often involve multiple sensitive attributes intersecting in complex ways. In this paper, we propose a novel approach to intersectional fairness, aiming at mitigating the biases that ML models exhibit with respect to multiple interconnected sensitive attributes. Our approach is versatile and practical, since (i) it can be applied to any gradient-based ML model, (ii) it can take into account any number of sensitive attributes, (iii) it can be used to optimise any fairness metric, and (iv) its computational cost is linear in the number of sensitive attributes. The effectiveness of our approach is demonstrated through its application to two real-world datasets.
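To illustrate the general idea of fairness-aware gradient-based training described above, the following sketch adds a fairness penalty (here, a squared demographic-parity gap per sensitive attribute) to the loss of a logistic regression trained on synthetic data. This is a minimal illustration under our own assumptions, not the FaUCI implementation: the penalty choice, hyper-parameters, and data are hypothetical. Note that the penalty is a sum with one term per attribute, so its cost grows linearly in the number of sensitive attributes.

```python
import numpy as np

# Illustrative sketch only: fairness-regularised logistic regression, in the
# spirit of "add a fairness penalty to the loss of a gradient-based model".
# All names, penalties, and hyper-parameters are assumptions, not FaUCI itself.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_penalty(p, groups):
    # Sum of squared demographic-parity gaps, one term per sensitive
    # attribute: cost is linear in the number of attributes.
    return sum((p[g == 1].mean() - p[g == 0].mean()) ** 2 for g in groups)

def loss(w, X, y, groups, lam):
    p = sigmoid(X @ w)
    bce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return bce + lam * fairness_penalty(p, groups)

def num_grad(f, w, eps=1e-5):
    # Central finite differences: fine for a tiny sketch, not for real models.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def train(X, y, groups, lam, steps=300, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * num_grad(lambda v: loss(v, X, y, groups, lam), w)
    return w

rng = np.random.default_rng(0)
n = 400
s1 = (rng.random(n) < 0.5).astype(float)  # two synthetic sensitive attributes
s2 = (rng.random(n) < 0.5).astype(float)
X = np.column_stack([rng.normal(size=(n, 2)), s1])  # s1 leaks into the features
y = (X @ np.array([1.0, -1.0, 2.0]) > 0.0).astype(float)  # biased labels

def dp_gap(w, g):
    p = sigmoid(X @ w)
    return abs(p[g == 1].mean() - p[g == 0].mean())

gap_base = dp_gap(train(X, y, [s1, s2], lam=0.0), s1)
gap_fair = dp_gap(train(X, y, [s1, s2], lam=10.0), s1)
print(f"gap without penalty: {gap_base:.3f}, with penalty: {gap_fair:.3f}")
```

On this synthetic data the penalised model yields a noticeably smaller demographic-parity gap on the biased attribute than the unregularised one, at some cost in accuracy; the trade-off is governed by the weight `lam`.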