2023
Machine learning black boxes, such as deep neural networks, are often hard to explain because their predictions depend on intricate relationships involving a huge number of internal parameters and, possibly, input features. This opaqueness from the human perspective makes their predictions hard to trust, especially in critical applications. In this paper we tackle this issue by introducing the design and implementation of CReEPy, an algorithm performing symbolic knowledge extraction based on explainable clustering. In particular, CReEPy relies on the underlying clustering performed by the ExACT procedure to provide human-interpretable Prolog rules mimicking the behaviour of the opaque model. Experiments assessing both the human readability and the predictive performance of the proposed algorithm are discussed here, using existing state-of-the-art techniques as benchmarks for comparison.
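As a rough illustration of the workflow the abstract describes, the sketch below trains an opaque model and hands it to a CReEPy-style extractor. The `Extractor.creepy` factory, its parameter names, and the extraction call are assumptions made here for illustration, not PSyKE's confirmed API, so that part is left as comments.

```python
# Illustrative sketch only: the Extractor.creepy factory and its
# parameters below are assumptions, not PSyKE's confirmed API.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. Train an opaque model playing the role of the black box.
X, y = load_iris(return_X_y=True, as_frame=True)
train_X, test_X, train_y, test_y = train_test_split(X, y, random_state=0)
black_box = MLPClassifier(max_iter=1000, random_state=0).fit(train_X, train_y)

# 2. Extract Prolog rules mimicking the black box via a
#    CReEPy-style extractor (hypothetical call, commented out):
# from psyke import Extractor
# extractor = Extractor.creepy(black_box, depth=2, error_threshold=0.1)
# theory = extractor.extract(train_X.assign(target=train_y))
# print(theory)  # human-readable rules approximating the black box
```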
Keywords
Explainable clustering, Explainable artificial intelligence, Symbolic knowledge extraction, PSyKE
Funding project
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy AI Systems
(01/11/2022–31/10/2025)