Abduction-Based Explanations for ML Models

in-depth analysis project

Author

Abstract

Abductive reasoning is a form of logical inference: it starts with an observation, or a set of observations, and then seeks the simplest and most likely conclusion.
Abduction has already been studied in the field of XAI (eXplainable Artificial Intelligence) as an approach for explaining the predictions a machine learning model makes on dataset samples, by computing subset-minimal or cardinality-minimal explanations in terms of input features.
This work originates as a reproduction of the study Abduction-Based Explanations for Machine Learning Models by Alexey Ignatiev, Nina Narodytska and Joao Marques-Silva, and focuses on building such an abductive, model-agnostic strategy, based on two algorithms, to obtain explanations for neural networks (NNs).
The experimental results on well-known datasets validate the scalability of the proposed approach as well as the quality of the computed solutions.
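To illustrate the core idea of a subset-minimal explanation, the following is a minimal Python sketch of a deletion-based linear search; it is a sketch under stated assumptions, not the authors' implementation. The entails oracle is left abstract here (in the referenced paper, that entailment check is delegated to a formal encoding of the network); all names, including the toy oracle below, are illustrative.

from typing import Callable, FrozenSet

def subset_minimal_explanation(
    features: FrozenSet[str],
    entails: Callable[[FrozenSet[str]], bool],
) -> FrozenSet[str]:
    # Deletion-based linear search: start from the full feature set
    # (assumed to entail the prediction) and try to drop each feature;
    # a drop is kept whenever the remaining features still entail the
    # prediction. The final set is subset-minimal by construction.
    assert entails(features), "the full feature set must entail the prediction"
    explanation = set(features)
    for f in sorted(features):            # fixed order, for reproducibility
        candidate = frozenset(explanation - {f})
        if entails(candidate):            # is f redundant?
            explanation = set(candidate)  # yes: drop it permanently
    return frozenset(explanation)

# Toy usage with a hypothetical oracle: the prediction is entailed
# whenever both 'age' and 'income' are among the fixed features.
oracle = lambda s: {'age', 'income'} <= s
print(subset_minimal_explanation(frozenset({'age', 'income', 'zip'}), oracle))
# -> frozenset({'age', 'income'})

Note that each feature is tested exactly once, so the procedure issues one oracle call per feature; the result is subset-minimal but not necessarily cardinality-minimal, which generally requires a more expensive search.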

Outcomes


Course

— a.y.

2021/2022

— credits

6

— cycle

2nd cycle

— language


teachers

— professor

Andrea Omicini

— other professors

Giovanni Ciatto

context

— university

Alma Mater Studiorum-Università di Bologna

— campus

Cesena

— department / faculty / school

DISI

— 2nd cycle

8614 Ingegneria e scienze informatiche 

URLs & IDs

— course ID

93669

related courses

— components

Intelligent Systems Engineering (Module 1) (2nd Cycle, 2021/2022) — Andrea Omicini
Intelligent Systems Engineering (Module 2) (2nd Cycle, 2021/2022) — Andrea Omicini
Intelligent Systems Engineering (Module 3) (2nd Cycle, 2021/2022) — Giovanni Ciatto
