2024
Machine-learning black boxes, exemplified by deep neural networks, are often hard to interpret because their predictions depend on intricate relationships among numerous internal parameters and input features. This lack of transparency from a human perspective makes their predictions difficult to trust, particularly in critical applications. In this paper, we address this issue by introducing the design and implementation of CReEPy, an algorithm for symbolic knowledge extraction based on explainable clustering. Specifically, CReEPy leverages the underlying clustering performed by the ExACT or CREAM algorithms to generate human-interpretable Prolog rules that mimic the behaviour of opaque models. Additionally, we introduce CRASH, an algorithm for the automated tuning of the hyper-parameters required by CReEPy. We present experiments evaluating both the human readability and the predictive performance of the proposed knowledge-extraction algorithm, using existing state-of-the-art techniques as benchmarks on real-world applications.
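For illustration, the rules extracted by CReEPy take the form of Prolog clauses mapping input features to a predicted class; the predicate, feature names, and thresholds below are hypothetical, shown only to convey the shape of such output, not the result of any experiment reported in the paper:

    iris(PetalLength, PetalWidth, setosa) :-
        PetalLength =< 2.3,
        PetalWidth =< 0.8.

A human reader can inspect a clause of this kind directly, since each condition bounds a single input feature.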
keywords
Explainable clustering, explainable artificial intelligence, symbolic knowledge extraction, PSyKE
journal or series
Intelligenza Artificiale (IA)
funding projects
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy AI Systems
(01/11/2022–31/10/2025)
TAILOR — Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization
(01/09/2020–31/08/2024)