Unveiling Opaque Predictors via Explainable Clustering: The CReEPy Algorithm

Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming (BEWARE 2023), co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)
CEUR Workshop Proceedings, AI*IA Series, Vol. 3615

Machine learning black boxes, such as deep neural networks, are often hard to explain because their predictions depend on complicated relationships involving a huge number of internal parameters and, possibly, input features. This opaqueness makes their predictions untrustworthy from the human perspective, especially in critical applications. In this paper we tackle this issue by presenting the design and implementation of CReEPy, an algorithm that performs symbolic knowledge extraction based on explainable clustering. In particular, CReEPy relies on the underlying clustering performed by the ExACT procedure to provide human-interpretable Prolog rules mimicking the behaviour of the opaque model. Experiments assessing both the human readability and the predictive performance of the proposed algorithm are discussed, using existing state-of-the-art techniques as benchmarks for comparison.
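To make the idea concrete, the following minimal sketch illustrates the general pattern behind cluster-based rule extraction. This is hypothetical illustrative code, not the actual CReEPy/PSyKE implementation: it partitions toy data into clusters with a naive k-means loop, then describes each cluster with an axis-aligned hypercube whose bounds become the body of a Prolog-style rule.

```python
# Illustrative sketch (hypothetical; not the actual CReEPy/PSyKE code):
# cluster the input space, then render each cluster as a Prolog-like
# rule whose body is the cluster's axis-aligned bounding box.
import random

random.seed(0)
# Two well-separated toy blobs standing in for regions of an opaque model.
points = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)]
points += [(random.gauss(3, 0.3), random.gauss(3, 0.3)) for _ in range(50)]

def kmeans(points, k=2, iters=10):
    """Naive k-means: returns a list of k clusters (lists of points)."""
    centroids = [points[0], points[-1]]  # crude initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Recompute centroids as cluster means (keep old one if empty).
        centroids = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return clusters

def rule(cluster, name, features=("X", "Y")):
    """Describe a cluster as a Prolog-style rule over its bounding box."""
    lo = [min(p[d] for p in cluster) for d in range(len(features))]
    hi = [max(p[d] for p in cluster) for d in range(len(features))]
    body = ", ".join(f"{f} >= {l:.2f}, {f} =< {h:.2f}"
                     for f, l, h in zip(features, lo, hi))
    return f"class({', '.join(features)}, {name}) :- {body}."

for i, cl in enumerate(kmeans(points)):
    print(rule(cl, f"c{i}"))
```

The hypercubic description is only one possible cluster geometry; the point of the sketch is the mapping from an opaque partition of the input space to human-readable logic clauses.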

Keywords: Explainable clustering, Explainable artificial intelligence, Symbolic knowledge extraction, PSyKE
Funding: AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems (01/11/2022–31/10/2025)