Gabriele Kern-Isberner, Gerhard Lakemeyer, Thomas Meyer (eds.)
19th International Conference on Principles of Knowledge Representation and Reasoning, pages 554–563
IJCAI Organization
August 2022
Procedures that explain the outcomes and behaviour of opaque predictors are becoming increasingly essential as machine learning (ML) black-box (BB) models pervade a wide variety of fields, and in particular critical ones (e.g., medical or financial) where decisions cannot be made on the basis of a blind automatic prediction. A growing number of methods designed to overcome this BB limitation are available in the literature; however, some ML tasks, such as regression and clustering, are nearly or completely neglected. Furthermore, existing techniques may not be applicable in complex real-world scenarios, or they may affect the output predictions with undesired artefacts.
In this paper we present the design and implementation of GridREx, a pedagogical algorithm to extract knowledge from black-box regressors, along with PEDRO, an optimisation procedure that automates the GridREx hyper-parameter tuning phase and achieves better results than manual tuning. We also report the results of our experiments applying GridREx and PEDRO to real-world scenarios, including an assessment of GridREx performance against similar state-of-the-art techniques used as benchmarks. GridREx proved able to provide more concise explanations with higher fidelity and stronger predictive capabilities.
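To give a rough idea of what a pedagogical (model-agnostic) knowledge extractor for regression does, the sketch below partitions the input space into a uniform grid, queries the black box inside each cell, and keeps an if-then rule wherever the predictions are nearly constant. This is an illustrative simplification under stated assumptions, not the GridREx algorithm itself: the function name `extract_rules`, the uniform grid, the standard-deviation threshold, and the toy data are all invented here for exposition.

```python
# Minimal, illustrative sketch of pedagogical rule extraction from a
# black-box regressor. NOT the actual GridREx algorithm: the grid layout,
# threshold criterion and names below are assumptions for exposition only.
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestRegressor


def extract_rules(black_box, X, n_splits=3, threshold=0.2):
    """Partition each input dimension into `n_splits` intervals and, for every
    hyper-cube whose black-box predictions are nearly constant, emit a rule
    mapping that hyper-cube to the mean predicted value."""
    edges = [np.linspace(X[:, j].min(), X[:, j].max(), n_splits + 1)
             for j in range(X.shape[1])]
    y_bb = black_box.predict(X)  # query the opaque model, not the true labels
    rules = []
    for cell in product(range(n_splits), repeat=X.shape[1]):
        # Select the samples falling inside this hyper-cube.
        mask = np.ones(len(X), dtype=bool)
        for j, k in enumerate(cell):
            mask &= (X[:, j] >= edges[j][k]) & (X[:, j] <= edges[j][k + 1])
        if not mask.any():
            continue
        cell_preds = y_bb[mask]
        if cell_preds.std() <= threshold:  # predictions are ~constant here
            bounds = [(edges[j][k], edges[j][k + 1]) for j, k in enumerate(cell)]
            rules.append((bounds, float(cell_preds.mean())))
    return rules


if __name__ == "__main__":
    # Toy piecewise-constant target, fitted by an opaque predictor.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(500, 2))
    y = np.floor(3 * X[:, 0])
    bb = RandomForestRegressor(random_state=0).fit(X, y)
    for bounds, value in extract_rules(bb, X):
        premise = " and ".join(f"{lo:.2f} <= X{j} <= {hi:.2f}"
                               for j, (lo, hi) in enumerate(bounds))
        print(f"if {premise} then output = {value:.2f}")
```

In the paper, hyper-parameters playing a role analogous to `n_splits` and `threshold` above are what PEDRO is designed to tune automatically instead of by hand.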
Keywords: Explainable AI, Integrating knowledge representation and machine learning, Integrating symbolic and sub-symbolic approaches, Applications of KR
Funding projects:
StairwAI — Stairway to AI: Ease the Engagement of Low-Tech users to the AI-on-Demand platform through AI (01/01/2021–31/12/2023)
TAILOR — Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization (01/09/2020–31/08/2024)