Proceedings of the 2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)
CEUR Workshop Proceedings, Vol. 3808
Sun SITE Central Europe, RWTH Aachen University
October 2024
This work explores the efficacy of symbolic knowledge extraction (SKE) techniques in identifying biases
and unfairness within opaque predictive models. Logic rules extracted from black-box predictors make it
possible to verify whether decisions are influenced by protected or sensitive features. In particular,
biased or unfair decisions can be identified by evaluating the extracted if-then rules and detecting
protected and/or sensitive features in the rules' preconditions. The effectiveness of SKE in this regard
is demonstrated here through simulations on a well-known loan-grant prediction data set. Our findings
highlight the potential of SKE as a valuable tool for revealing biases and discrimination in opaque
predictions, ultimately contributing to the pursuit of fair and transparent decision-making systems.
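The bias check described above can be sketched in a few lines of Python. The rule representation (a precondition mapping paired with a decision), the feature names, and the flag_biased_rules helper are illustrative assumptions for this sketch, not PSyKE's actual API:

```python
# Hypothetical sketch: flag extracted if-then rules whose preconditions
# mention protected features. The rule format and feature names below are
# assumptions made for illustration only.

PROTECTED = {"gender", "age", "ethnicity"}

def flag_biased_rules(rules, protected=PROTECTED):
    """Return the rules whose precondition references a protected feature.

    Each rule is a (preconditions, decision) pair, where preconditions
    maps feature names to constraints, e.g. {"income": "> 40000"}.
    """
    return [
        (pre, decision)
        for pre, decision in rules
        if protected & pre.keys()  # non-empty intersection => suspect rule
    ]

# Toy rules for a loan-grant predictor (illustrative only).
rules = [
    ({"income": "> 40000", "credit_history": "== good"}, "grant"),
    ({"gender": "== male", "income": "> 30000"}, "grant"),
    ({"credit_history": "== bad"}, "deny"),
]

# Only the rule conditioned on "gender" is flagged.
suspect = flag_biased_rules(rules)
```

In practice, rules extracted by an SKE tool would be parsed into such a structure first; the detection step itself reduces to an intersection test between each rule's precondition features and the set of protected attributes.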
Keywords: Fairness in AI, Bias in AI, Explainable artificial intelligence, XAI, Symbolic knowledge extraction, PSyKE
Funding projects
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems
(01/11/2022–31/10/2025)
TAILOR — Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization
(01/09/2020–31/08/2024)