Federico Sabbatini, Giovanni Ciatto, Andrea Omicini
Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling (eds.)
Explainable and Transparent AI and Multi-Agent Systems. Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers, pages 18–38
Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence) 12688
Springer, Cham, Switzerland
July 2021
Knowledge-extraction methods are applied to ML-based predictors to attain explainable representations of their operation when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks. Iter is among the few rule-extraction methods capable of extracting symbolic rules out of sub-symbolic regressors. However, its performance – here intended as the interpretability of the rules it extracts – easily degrades as the complexity of the regression task at hand increases. In this paper we propose GridEx, an extension of the Iter algorithm, aimed at extracting symbolic knowledge – in the form of lists of if-then-else rules – from any sort of sub-symbolic regressor, including neural networks of arbitrary depth. With respect to Iter, GridEx produces shorter rule lists while retaining higher fidelity w.r.t. the original regressor. We report several experiments assessing the performance of GridEx against Iter and Cart (i.e., decision-tree regressors) used as benchmarks.
(keywords) Explainable AI; Knowledge extraction; Interpretable prediction; Regression; Iter; GridEx
EXplainable and TRAnsparent AI and Multi-Agent Systems: Third International Workshop (EXTRAAMAS 2021), 03/05/2021–04/05/2021