|
|
This work provides a methodological approach to fairness and bias mitigation in the design and development of data-driven methods. A central contribution is the proposal and implementation of a Fair-by-Design workflow that integrates bias mitigation strategies at three levels: the data, the algorithms themselves (both well-established and newly proposed ones), and the decision-making process. The study evaluates several algorithms on a single dataset, with the primary objective of ensuring the general, equitable, and unbiased application of data-driven algorithms across domains. The methodology systematically evaluates multiple bias mitigation strategies, with particular emphasis on comparing their impact on the predictive accuracy of the algorithms. This comparison yields practical insights into the trade-offs between fairness and accuracy, showing how different mitigation approaches lead to different accuracy scores on the same dataset with the same models. The thesis thus contributes to the ongoing discourse on fairness in machine learning and data-driven decision-making, and its results offer guidance to stakeholders across sectors in making informed decisions about algorithm deployment that promote fairness and minimize bias.
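The fairness-accuracy trade-off described above can be illustrated with a minimal sketch. The code below is purely hypothetical and not taken from the thesis: it compares a baseline classifier's predictions against a bias-mitigated variant's predictions on toy data, reporting standard accuracy alongside statistical parity difference (a common group-fairness metric). All data, model names, and values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the thesis): comparing accuracy
# and a simple group-fairness metric for two hypothetical classifiers.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def statistical_parity_difference(y_pred, group):
    """P(pred = 1 | group = 0) - P(pred = 1 | group = 1); 0 is parity."""
    def rate(g):
        members = [p for p, a in zip(y_pred, group) if a == g]
        return sum(members) / len(members)
    return rate(0) - rate(1)

# Toy data: true labels, a binary protected attribute, and predictions
# from a baseline model and a bias-mitigated variant (both hypothetical).
y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
group     = [0, 0, 0, 0, 1, 1, 1, 1]
baseline  = [1, 0, 1, 1, 0, 0, 0, 0]
mitigated = [1, 0, 0, 0, 0, 1, 0, 0]

for name, pred in [("baseline", baseline), ("mitigated", mitigated)]:
    print(name,
          "accuracy:", round(accuracy(y_true, pred), 3),
          "parity diff:", round(statistical_parity_difference(pred, group), 3))
```

On this toy data the mitigated model closes the parity gap but loses some accuracy, which is exactly the kind of trade-off the workflow is meant to surface and quantify.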