Assessing and Enforcing Fairness in the AI Lifecycle

A significant challenge in detecting and mitigating bias is fostering a mindset among AI developers that treats unfairness as a problem to be addressed.
The fairness literature is broad, and the learning curve for deciding when to apply existing metrics and techniques for bias detection or mitigation is steep.
This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation, organised according to the AI lifecycle.
Gaps and challenges identified during the development of this work are also discussed.
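
As a concrete illustration of the kind of fairness notion the survey covers, the sketch below computes the demographic parity difference, i.e. the gap in positive-prediction rates between two demographic groups. It is a minimal, illustrative example only: the function name and the toy data are assumptions for this page and are not taken from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group_0 = y_pred[sensitive == 0].mean()  # positive rate for group 0
    rate_group_1 = y_pred[sensitive == 1].mean()  # positive rate for group 1
    return abs(rate_group_0 - rate_group_1)

# Toy example: binary model predictions and a binary sensitive attribute.
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

A value close to 0 indicates similar positive-prediction rates across groups; larger values signal a disparity that bias-mitigation techniques at different lifecycle stages aim to reduce.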

Reference publication
Assessing and Enforcing Fairness in the AI Lifecycle (conference paper, 2023), by Roberta Calegari, Gabriel G. Castañé, Michela Milano, Barry O’Sullivan
Funding project
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy AI Systems (01/11/2022–31/10/2025)