FairLib: A Toolkit for Bias Analysis and Mitigation in AI Systems


Automated decision-making systems are rapidly permeating socially sensitive domains such as finance, healthcare, justice, and autonomous mobility. While these data-driven solutions can increase efficiency, they can also perpetuate or amplify existing inequities whenever the underlying algorithms exhibit unfair behavior. This thesis provides a systematic investigation of algorithmic fairness, clarifying multiple, often competing, formal definitions adopted in the literature and mapping them to practical risks of bias and discrimination that arise throughout the machine-learning pipeline.

After surveying the main sources of bias—data imbalance, historical prejudice, model opacity, and feedback loops—the work reviews mitigation strategies grouped into three families: pre-processing (data-repair and re-sampling), in-processing (fairness-aware losses, constraints, regularizers), and post-processing (prediction-adjustment and explanation tools). Building upon these foundations, the thesis introduces FairLib: a modular, open-source library designed to address limitations in existing fairness toolkits by unifying bias-diagnosis metrics and mitigation algorithms behind a consistent API. FairLib is model-agnostic, integrates with popular ML frameworks, and facilitates reproducible experimentation through configurable pipelines.
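To make the idea of a unified diagnosis-and-mitigation API concrete, the sketch below shows, in plain Python, the two kinds of building block such a toolkit exposes: a bias metric (here, demographic parity difference) and a post-processing mitigation step (a per-group threshold adjustment). The function names and signatures are illustrative assumptions, not FairLib's actual interface, which the abstract does not specify.

```python
# Hedged sketch of a fairness toolkit's building blocks; names and
# signatures are hypothetical, not FairLib's documented API.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    rates = {}
    for g in (0, 1):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates[0] - rates[1])

def equalize_rates(scores, group, target_rate):
    """Post-processing sketch: choose a per-group score threshold so that
    each group's positive-prediction rate matches target_rate (a crude
    quantile rule, standing in for a calibrated adjustment)."""
    thresholds = {}
    for g in (0, 1):
        s = sorted((sc for sc, gr in zip(scores, group) if gr == g),
                   reverse=True)
        k = max(1, round(target_rate * len(s)))
        thresholds[g] = s[k - 1]
    return [int(sc >= thresholds[gr]) for sc, gr in zip(scores, group)]

# Toy example: a scorer that systematically favors group 0.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
group = [0, 0, 0, 0, 1, 1, 1, 1]
naive = [int(sc >= 0.5) for sc in scores]
print(demographic_parity_difference(naive, group))        # large gap
adjusted = equalize_rates(scores, group, target_rate=0.5)
print(demographic_parity_difference(adjusted, group))     # gap closed
```

Pre- and in-processing methods plug into the same pattern: each is a transformation of the data, loss, or predictions that is scored by the same metric functions, which is what lets a single pipeline compare interventions from all three families.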

A preliminary evaluation on canonical benchmark datasets shows that selected FairLib pipelines can reduce unfairness while leaving predictive accuracy broadly unchanged. Although limited to a modest set of benchmarks, these findings suggest that systematic fairness interventions are achievable without prohibitive trade-offs.

By coupling a critical analysis of fairness concepts with a practical, extensible toolkit, this thesis aims to foster greater transparency and accountability in AI systems and help practitioners deploy models that respect fundamental principles of equity.

Keywords: bias mitigation, machine learning, FairLib, Python, AI fairness