Fairness has become a crucial issue in the development of AI systems, especially in the context of ML. ML models are often trained on datasets that are biased towards certain groups of individuals, and this bias affects the model's predictions. This can lead to unfair outcomes that may be detrimental to individuals from disadvantaged groups, defined in terms of gender, ethnicity, age, religion, political views, and so on.
Intersectionality is a framework that analyses how interlocking systems of power and oppression affect individuals along overlapping dimensions such as race, gender, sexual orientation, and class. Intersectionality theory therefore implies that fairness in artificial intelligence systems should be protected with respect to multi-dimensional protected attributes (see full paper).
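To make the notion of multi-dimensional protected attributes concrete, the sketch below enumerates the intersectional subgroups induced by two protected attributes; fairness would then have to hold on each such subgroup, not only on each attribute taken separately. This is a minimal illustration with hypothetical column names and toy data, not material taken from any of the cited methods.

```python
# Minimal sketch: enumerating intersectional subgroups from two protected
# attributes. Column names ("gender", "ethnicity") and the data are
# hypothetical, used only for illustration.
import pandas as pd

data = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "F"],
    "ethnicity": ["A", "A", "B", "B", "A"],
    "label":     [1, 0, 1, 1, 0],
})

# Each combination of protected-attribute values defines one intersectional
# subgroup; intersectional fairness requires the model to behave fairly
# across all such combinations.
for (gender, ethnicity), group in data.groupby(["gender", "ethnicity"]):
    positive_rate = group["label"].mean()
    print(f"subgroup ({gender}, {ethnicity}): "
          f"{len(group)} samples, positive rate {positive_rate:.2f}")
```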
The literature offers several techniques whose purpose is to reduce the unfairness of ML algorithms. A non-exhaustive list of methods that act on the training of a NN is the following (a generic sketch of this in-processing idea is given after the list):
- FaUCI
- FNNC
- Cho et al.
- Wagner et al.
- Jiang et al.
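All of the methods listed above intervene on the training of the network itself. The sketch below illustrates this general in-processing idea by adding a simple demographic-parity penalty to the task loss of a small PyTorch model; it is an assumed, illustrative formulation and does not reproduce the actual loss terms used by FaUCI or by the other methods. Variable names and the model architecture are placeholders.

```python
# Minimal sketch of an in-processing fairness approach: a fairness penalty
# is added to the standard training loss. Here the penalty is the gap in
# mean predicted probability between two subgroups (demographic parity);
# the cited methods use their own, more refined formulations.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
fairness_weight = 0.5  # trade-off between predictive accuracy and fairness

def training_step(x, y, group):
    """One training step on a batch.

    x: feature tensor, y: binary labels (float), group: binary protected
    attribute (or, more generally, the index of an intersectional subgroup).
    Assumes the batch contains samples from both subgroups.
    """
    logits = model(x).squeeze(1)
    task_loss = bce(logits, y)

    # Fairness penalty: absolute difference of mean predicted probability
    # between the two subgroups present in the batch.
    probs = torch.sigmoid(logits)
    gap = (probs[group == 0].mean() - probs[group == 1].mean()).abs()

    loss = task_loss + fairness_weight * gap
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The weight on the penalty controls the trade-off between accuracy and fairness: larger values push the network towards equal treatment of the subgroups at a possible cost in predictive performance.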
The goal of this thesis is to use the first method, FaUCI, and possibly some of the others, to achieve intersectional fairness when applying NNs to real-world datasets.
Keywords
fairness, intersectionality, FaUCI