Enforcing Fairness via Constraint Injection with FaUCI

AEQUITAS 2024: Fairness and Bias in AI, pages 8:1–8:13
CEUR Workshop Proceedings 3808
CEUR-WS
October 2024

The problem of fairness in AI can be tackled by minimising bias in the data (pre-processing), in the algorithms (in-processing), or in the results (post-processing). In the particular case of in-processing applied to supervised machine learning, state-of-the-art solutions rely on a few well-known fairness metrics (e.g., demographic parity, disparate impact, or equalised odds) optimised during training. However, these metrics mostly focus on binary attributes and their effects on binary classification problems. Accordingly, in this work we propose FaUCI, a general-purpose framework for injecting fairness constraints into neural networks (or, more generally, any model trained via stochastic gradient descent) that supports attributes of many sorts, including binary, discrete, and continuous features. To evaluate its effectiveness and efficiency, we test FaUCI against several feature types and fairness metrics. Furthermore, we compare FaUCI with state-of-the-art in-processing solutions, demonstrating its superiority.
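The paper's actual FaUCI implementation is not reproduced here; as a minimal illustrative sketch of the general in-processing idea it describes (a fairness term added to the training loss and optimised by gradient descent), the following trains a logistic-regression model with a demographic-parity penalty on a binary sensitive attribute. All names, data, and hyperparameters are hypothetical and chosen only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, y, s, lam, lr=0.1, epochs=500):
    """Gradient-descent logistic regression with a demographic-parity
    regulariser: loss = BCE + lam * (E[p | s=0] - E[p | s=1])^2.
    This is an illustrative stand-in, not the FaUCI API."""
    n, d = x.shape
    w, b = np.zeros(d), 0.0
    m0, m1 = (s == 0), (s == 1)
    for _ in range(epochs):
        p = sigmoid(x @ w + b)
        grad_bce = (p - y) / n                  # BCE gradient w.r.t. logits
        gap = p[m0].mean() - p[m1].mean()
        # Gradient of the squared parity gap w.r.t. each logit,
        # via the chain rule through the sigmoid (p * (1 - p)).
        dpen = np.zeros(n)
        dpen[m0] = 2 * gap * p[m0] * (1 - p[m0]) / m0.sum()
        dpen[m1] = -2 * gap * p[m1] * (1 - p[m1]) / m1.sum()
        g = grad_bce + lam * dpen
        w -= lr * (x.T @ g)
        b -= lr * g.sum()
    p = sigmoid(x @ w + b)
    return abs(p[m0].mean() - p[m1].mean())

# Synthetic data where a feature is correlated with the sensitive attribute.
rng = np.random.default_rng(0)
n = 400
s = rng.integers(0, 2, n)                       # binary sensitive attribute
x = rng.normal(size=(n, 2))
x[:, 0] += 2 * s                                # feature leaks group membership
y = (x[:, 0] + rng.normal(0.0, 0.5, n) > 1.0).astype(float)

gap_plain = train(x, y, s, lam=0.0)             # no fairness constraint
gap_fair = train(x, y, s, lam=5.0)              # with the parity penalty
print(f"gap without penalty: {gap_plain:.3f}, with penalty: {gap_fair:.3f}")
```

The same pattern generalises to any SGD-trained model: the fairness metric is differentiable in the model outputs, so its gradient can simply be added to the task-loss gradient at each step.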

keywords: AI Fairness, FaUCI, in-processing, regularization, mitigation
reference talk
Enforcing Fairness via Constraint Injection with FaUCI (20/10/2024) — Matteo Magnini (Matteo Magnini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini)
journal or series
CEUR Workshop Proceedings (CEUR-WS.org)
funding project
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems (01/11/2022–31/10/2025)