Ángel S. Marrero, Gustavo A. Marrero, Carlos Bethencourt, Liam James,
Roberta Calegari
Proceedings of the 2nd Workshop on Fairness and Bias in AI, co-located with the 27th European Conference on Artificial Intelligence (ECAI 2024)
CEUR Workshop Proceedings, Vol. 3808
October 2024
This study focuses on predicting students' academic performance, examining how AI predictive models often reflect socioeconomic inequalities driven by factors such as parental socioeconomic status and the home environment, which undermine the fairness of predictions. We compare three AI models in an ablation study to understand how these sensitive features (referred to as circumstances) influence predictions. Our findings reveal biases that favor advantaged groups, whether the goal is to identify excellence or underperformance. Additionally, the third model introduces a two-stage estimation procedure that mitigates the impact of sensitive features on predictions, yielding a model that can be considered fair with respect to inequality of opportunity.
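The paper does not spell out the two-stage procedure in this abstract; a common approach in the equality-of-opportunity literature is residualization, sketched below under that assumption: stage 1 regresses the outcome and non-sensitive features on the circumstances and keeps the residuals, stage 2 fits the predictive model on those residuals so its predictions are (linearly) uncorrelated with the circumstances. All variable names and the synthetic data are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): circumstances C (e.g. parental SES),
# non-sensitive features E (e.g. effort measures), and outcome y (a score).
n = 500
C = rng.normal(size=(n, 2))   # sensitive "circumstance" features
E = rng.normal(size=(n, 3))   # non-sensitive features
y = C @ [0.8, -0.5] + E @ [1.0, 0.4, 0.2] + rng.normal(scale=0.5, size=n)

def residualize(X, C):
    """Stage 1: remove the linear component of X explained by circumstances C."""
    Cc = np.column_stack([np.ones(len(C)), C])          # add an intercept
    beta, *_ = np.linalg.lstsq(Cc, X, rcond=None)       # OLS fit on C
    return X - Cc @ beta                                # keep the residuals

# Stage 1: purge circumstances from the outcome and the remaining features.
y_res = residualize(y, C)
E_res = residualize(E, C)

# Stage 2: fit the predictive model on the residualized data only.
Ec = np.column_stack([np.ones(n), E_res])
w, *_ = np.linalg.lstsq(Ec, y_res, rcond=None)
y_hat = Ec @ w

# By construction, predictions are linearly uncorrelated with each circumstance.
for j in range(C.shape[1]):
    print(abs(np.corrcoef(y_hat, C[:, j])[0, 1]))       # ~0 up to round-off
```

Because OLS residuals are orthogonal to the regressors, any linear model built on `y_res` and `E_res` produces predictions with zero linear correlation with `C`, which is one operationalization of "fair with respect to circumstances".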
keywords
AI-fairness, socioeconomic equality of opportunity, AI-ethics
funding project
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems
(01/11/2022–31/10/2025)