Addressing Fairness in AI Systems: Design and Development of a Pragmatic (Meta-)Methodology

Biases and discrimination are present in many Artificial Intelligence (AI) systems, just as they are rooted in society. Fairness in AI refers to the development of software systems that do not exhibit biases or systematic discrimination against specific individuals or groups. Addressing fairness is particularly challenging because it requires balancing ethical, social, legal, and technical expertise. This thesis proposes a meta-methodology for building fair AI systems, offering both a conceptual framework and a concrete software tool that implements the methodology. Rather than a single solution for all kinds of AI systems, the meta-methodology provides a flexible, adaptable approach that can be tailored to different domains and cultural contexts. The methodology is based on a Question-Answering mechanism that guides users through a structured flow of questions and answers, automating, behind the scenes, the technical steps needed to eventually build a fair AI system. By leveraging a questionnaire, the system gathers contextual and domain-specific information and applies the related socio-legal constraints to ensure fairness. This form of interaction allows users to make well-informed decisions even without deep technical knowledge, thereby increasing awareness of the fairness problem. The proposed approach is easily adaptable and evolvable, so as to keep up with changes in the domain of the system under design and to refine the methodology over time.
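
As a purely illustrative sketch of the Question-Answering mechanism described above, the flow could be modeled as questions whose answers imply socio-legal constraints, with the accumulated constraints then mapped to an automated technical step. All class names, constraint labels, and the constraint-to-step mapping below are hypothetical stand-ins, not the thesis's actual tool or catalogue of interventions.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    # Maps each answer label to the (hypothetical) socio-legal
    # constraint labels that choosing it implies.
    options: dict[str, list[str]]

@dataclass
class Session:
    constraints: set = field(default_factory=set)

    def ask(self, question: Question, answer: str) -> None:
        # Record the constraints implied by the chosen answer;
        # unknown answers simply contribute no constraints.
        self.constraints |= set(question.options.get(answer, []))

    def recommend(self) -> str:
        # Toy mapping from accumulated constraints to a technical step
        # performed "behind the scenes"; a real catalogue would be richer.
        if "disparate_impact_banned" in self.constraints:
            return "apply a pre-processing mitigation (e.g. reweighing)"
        if "equal_error_rates_required" in self.constraints:
            return "optimise for equalised odds during training"
        return "monitor per-group metrics after deployment"

if __name__ == "__main__":
    question = Question(
        "Which legal regime governs the system's decisions?",
        {
            "EU anti-discrimination law": ["disparate_impact_banned"],
            "sector-specific credit regulation": ["equal_error_rates_required"],
            "none / unsure": [],
        },
    )
    session = Session()
    session.ask(question, "EU anti-discrimination law")
    print("Automated step:", session.recommend())
```

In this sketch the user only answers domain-level questions; the selection of a concrete fairness intervention happens automatically from the collected constraints, which is the separation of concerns the methodology relies on.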