Fairness has become a crucial issue in the development of AI systems, especially in the context of ML. ML models are often trained on datasets that are biased against certain groups of individuals, and this bias carries over into the model's predictions. The result can be unfair outcomes that harm individuals from the disadvantaged groups, defined in terms of gender, ethnicity, age, religion, political views, etc.
The literature offers several techniques whose purpose is to reduce the unfairness of ML algorithms. A non-exhaustive list includes:
- FaUCI
- FNNC
- Cho et al.
- Wagner et al.
- Jiang et al.
With this work we analyse how these methods perform with different batch sizes (BS). BS is a paramount hyper-parameter for fairness methods because they all work by estimating distributions from each mini-batch: the larger the BS, the more accurate the estimated distributions, but the slower the method. We also analyse the techniques on multiple datasets and on different types of sensitive attributes (e.g., binary, categorical, numeric). A sketch of how such a batch-wise estimate enters the training loss is given below.
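As a minimal illustration of why BS matters, the sketch below implements a generic demographic-parity penalty estimated from a single mini-batch. It is not the exact formulation of FaUCI or of any of the other methods listed above; the names `model`, `loader`, and `lambda_fair` are hypothetical placeholders for a PyTorch-style training setup.

```python
# Minimal sketch: a demographic-parity penalty estimated from one mini-batch.
# The quality of the estimate depends on how many samples of each group the
# batch happens to contain, which is why batch size (BS) matters.
import torch

def demographic_parity_penalty(y_pred: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """Absolute gap between the mean predictions of two groups,
    both estimated from the current mini-batch only."""
    mask = group.bool()  # assumes a binary sensitive attribute (0/1)
    if mask.all() or (~mask).all():
        # The batch contains a single group: the gap cannot be estimated.
        # Small batches hit this case more often, degrading the estimate.
        return torch.zeros((), device=y_pred.device)
    return (y_pred[mask].mean() - y_pred[~mask].mean()).abs()

# Hypothetical training loop (model, loader, optimiser assumed to exist):
# for x, y, group in loader:          # loader's batch_size is the BS under study
#     y_pred = model(x).squeeze(-1)
#     loss = torch.nn.functional.binary_cross_entropy(y_pred, y) \
#            + lambda_fair * demographic_parity_penalty(y_pred, group)
#     optimiser.zero_grad(); loss.backward(); optimiser.step()
```

The penalty is recomputed from scratch on every batch, so its variance shrinks as BS grows, at the cost of slower iterations; this is the trade-off the experiments investigate.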
Keywords:
fairness, FaUCI