Davide Calvaresi, Amro Najjar, Andrea Omicini, Reyhan Aydoğan, Rachele Carli, Giovanni Ciatto, Yazan Mualla, Kary Främling (eds.)
Explainable and Transparent AI and Multi-Agent Systems, pages 116–129
Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol. 14127
Springer, London, UK
2023
Opaque machine learning models, currently exploited to carry out a wide variety of supervised and unsupervised learning tasks, achieve impressive predictive performance. However, they act as black boxes (BBs) from the human standpoint, so they cannot be fully trusted in critical applications unless there exists a method to extract symbolic, human-readable knowledge from them.
In this paper we analyse a design recurrent among symbolic knowledge extractors for BB predictors: the creation of rules associated with hypercubic regions of the input space. We argue that this kind of partitioning may lead to suboptimal solutions when the data set at hand is sparse, high-dimensional, or does not satisfy symmetric constraints. We then propose two different knowledge-extraction workflows involving clustering approaches, highlighting their potential to outperform existing knowledge-extraction techniques in terms of predictive performance on data sets of any kind.
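The abstract contrasts hypercube-based rule extraction with clustering-driven workflows. The sketch below is a minimal, hypothetical illustration of that contrast using scikit-learn, not the extraction algorithms proposed in the paper: a shallow decision-tree surrogate trained on a black box's own predictions yields axis-aligned hypercubic rules, while a clustering-first pass derives one rule per cluster from the bounding box of its members. All model choices and names here are illustrative assumptions.

# Minimal sketch (assumed setup, not the authors' method): hypercube-based
# versus clustering-based symbolic knowledge extraction from a black box.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Black box (BB): any opaque predictor works; a random forest stands in here.
bb = RandomForestClassifier(random_state=0).fit(X, y)

# (i) Hypercube-based extraction: fit an interpretable surrogate on the BB's
# predictions; each root-to-leaf path is an axis-aligned hypercubic rule.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, bb.predict(X))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))

# (ii) Clustering-first workflow: partition the inputs by similarity, then
# describe each cluster by the bounding box of its members and label it
# with the BB's majority prediction inside that region.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for c in range(4):
    members = X[clusters == c]
    lo, hi = members.min(axis=0), members.max(axis=0)
    label = np.bincount(bb.predict(members)).argmax()
    conds = " and ".join(
        f"{l:.2f} <= x{i} <= {h:.2f}" for i, (l, h) in enumerate(zip(lo, hi))
    )
    print(f"if {conds} then class {label}")

Note that the cluster bounding boxes in this toy version are still hypercubes; the difference illustrated is that the regions follow the data's own grouping rather than recursive axis splits, which is the property the abstract argues helps on sparse or asymmetric data sets.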
keywords
Explainable artificial intelligence; Symbolic knowledge extraction; Clustering
origin event
journal or series
Lecture Notes in Computer Science (LNCS)
container publication
funding project
TAILOR: Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization
(01/09/2020–31/08/2024)
works as
reference publication for talk