Fairness in AutoML

Tara Sabooni  •  Yasaman Samadzadeh
Abstract

This project investigates the integration of individual fairness metrics into Automated Machine Learning (AutoML) pipelines, with the goal of aligning model selection and evaluation with ethical and socially responsible AI practices. Leveraging IBM's inFairness library, we design a training pipeline that evaluates models not only on predictive accuracy but also on an individual fairness criterion, specifically the spouse consistency metric. By embedding fairness evaluation within the AutoML loop, the project aims to contribute to the development of more accountable and equitable AI systems.
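
Intuitively, spouse consistency measures how often a model's prediction stays the same when an individual's relationship feature is flipped between Husband and Wife (as encoded in Adult-style census data). The following is a minimal sketch of such a check, assuming a scikit-learn-style classifier and an integer-encoded relationship column; the function name, column index, and encoding are illustrative choices here, not the inFairness API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

HUSBAND, WIFE = 0, 1  # hypothetical integer encoding of the relationship feature

def spouse_consistency(model, X, relationship_col):
    """Fraction of examples whose predicted label is unchanged when the
    relationship feature is flipped between Husband and Wife."""
    X_flipped = X.copy()
    col = X_flipped[:, relationship_col]
    is_husband, is_wife = (col == HUSBAND), (col == WIFE)
    X_flipped[is_husband, relationship_col] = WIFE
    X_flipped[is_wife, relationship_col] = HUSBAND
    # A perfectly individually fair model (w.r.t. this feature) scores 1.0.
    return float(np.mean(model.predict(X) == model.predict(X_flipped)))

# Toy usage on synthetic data: column 4 plays the relationship role.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 4] = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
model = LogisticRegression().fit(X, y)
print("spouse consistency:", spouse_consistency(model, X, relationship_col=4))
```

Inside an AutoML loop, a candidate model's score could then combine accuracy with this consistency value (for example, a weighted sum of the two), so that individual fairness participates in model selection rather than being checked only after the fact.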

Outcomes