Davide Calvaresi, Amro Najjar, Andrea Omicini, Reyhan Aydoğan, Rachele Carli, Giovanni Ciatto, Yazan Mualla, Kary Främling (eds.)
Explainable and Transparent AI and Multi-Agent Systems, pp. 116–129
Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence) 14127
Springer, London, UK
2023
Opaque machine learning models, currently exploited to carry out a wide variety of supervised and unsupervised learning tasks, achieve impressive predictive performance. However, they act as black boxes (BBs) from the human standpoint, so they cannot be fully trusted in critical applications unless there is a method to extract symbolic, human-readable knowledge from them.
In this paper we analyse a recurrent design adopted by symbolic knowledge extractors for BB predictors: the creation of rules associated with hypercubic regions of the input space. We argue that this kind of partitioning may lead to suboptimal solutions when the data set at hand is sparse, high-dimensional, or does not satisfy symmetry constraints. We then propose two knowledge-extraction workflows involving clustering approaches, highlighting how they can outperform existing knowledge-extraction techniques in terms of predictive performance on data sets of any kind.
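To make the contrast concrete, the following minimal Python sketch (our illustration, not the authors' actual workflows) extracts interval rules from an opaque regressor in two ways: by partitioning the input space into a fixed hypercubic grid, and by bounding K-means clusters with axis-aligned boxes so that the regions follow the data distribution. The function names (grid_rules, cluster_rules), the toy data, and the scikit-learn models are all assumptions made for illustration.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))      # 2-D toy input space
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2         # nonlinear target
black_box = RandomForestRegressor(random_state=0).fit(X, y)

def grid_rules(X, model, bins=3):
    # Hypercube workflow: split every dimension into equal-width bins
    # and label each non-empty cube with the BB's mean prediction.
    edges = [np.linspace(X[:, d].min(), X[:, d].max(), bins + 1)
             for d in range(X.shape[1])]
    rules = []
    for idx in np.ndindex(*(bins,) * X.shape[1]):
        lows = np.array([edges[d][i] for d, i in enumerate(idx)])
        highs = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        inside = np.all((X >= lows) & (X <= highs), axis=1)
        if inside.any():
            rules.append((lows, highs, model.predict(X[inside]).mean()))
    return rules

def cluster_rules(X, model, k=6):
    # Clustering workflow: bound each K-means cluster with its
    # axis-aligned bounding box, so regions adapt to sparse or
    # asymmetric data instead of following a rigid grid.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    rules = []
    for c in range(k):
        members = X[labels == c]
        rules.append((members.min(axis=0), members.max(axis=0),
                      model.predict(members).mean()))
    return rules

for lows, highs, out in cluster_rules(X, black_box):
    conds = " and ".join(f"{lo:+.2f} <= x{d} <= {hi:+.2f}"
                         for d, (lo, hi) in enumerate(zip(lows, highs)))
    print(f"if {conds} then y = {out:+.2f}")

On sparse or asymmetric data, the cluster-derived boxes typically cover the populated regions with fewer and tighter rules than a uniform grid of the same arity, which is the intuition behind the paper's argument against purely hypercubic partitioning.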
Keywords
Explainable artificial intelligence; Symbolic knowledge extraction; Clustering
Origin event
Journal or series
Lecture Notes in Computer Science (LNCS)
Container publication
Funding project
TAILOR — Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization
(01/09/2020–31/08/2024)
Serves as
Reference publication for a presentation