abstract
Abductive reasoning is a form of logical inference: it starts with an observation, or a set of observations, and then seeks the simplest and most likely explanation for it.
Abduction has already been studied in the field of XAI (eXplainable Artificial Intelligence) as an approach to explaining the predictions of machine learning models on samples from a dataset, by generating subset-minimal or cardinality-minimal explanations with respect to the input features.
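For concreteness, a minimal sketch of the standard formalization used in this line of work (the classifier $\kappa$, the instance $v$, the predicted class $c$, and the feature set $\mathcal{F}$ are notation introduced here for illustration, not defined in this abstract): a set of features $X \subseteq \mathcal{F}$ is an abductive explanation for the prediction $\kappa(v) = c$ if fixing those features to their values in $v$ is sufficient to force the prediction,
\[
\forall x.\ \Big( \bigwedge_{i \in X} x_i = v_i \Big) \rightarrow \big( \kappa(x) = c \big),
\]
and it is subset-minimal if no proper subset of $X$ satisfies this condition (cardinality-minimal if no smaller set of features does).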
This work originated as a reproduction of the study Abduction-Based Explanations for Machine Learning Models by Alexey Ignatiev, Nina Narodytska, and Joao Marques-Silva, and it focuses on building such an abductive, model-agnostic strategy, based on two algorithms, to obtain explanations for neural networks (NNs).
The experimental results on well-known datasets validate the scalability of the proposed approach as well as the quality of the computed solutions.
outcomes