Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI


Roberta Calegari, Giovanni Ciatto, Jason Dellaluce, Andrea Omicini

In the era of the digital revolution, individual lives increasingly cross and interconnect ubiquitous online domains and offline reality through smart technologies that discover, store, process, learn from, analyse, and predict from huge amounts of environment-collected data. Sub-symbolic techniques, such as deep learning, play a key role there, yet they are often built as black boxes that are neither inspectable, nor interpretable, nor explainable. New research efforts towards explainable artificial intelligence (XAI) are trying to address those issues, with the final purpose of building understandable, accountable, and trustable AI systems; still, there seems to be a long way to go. Generally speaking, while we fully understand and appreciate the power of sub-symbolic approaches, we believe that symbolic approaches to machine intelligence, once properly combined with sub-symbolic ones, have a critical role to play in achieving key properties of XAI such as observability, interpretability, explainability, accountability, and trustability. In this paper we describe an example of the integration of symbolic and sub-symbolic techniques. First, we sketch a general framework where symbolic and sub-symbolic approaches can fruitfully combine to produce intelligent behaviour in AI applications. Then, we focus on the goal of building a narrative explanation for ML predictors: to this end, we exploit the logical knowledge obtained by translating decision tree predictors into logic programs.
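
The decision-tree-to-LP translation mentioned in the abstract can be sketched concretely. The following Python snippet is a minimal illustration under stated assumptions, not the authors' implementation: it walks a decision tree trained with scikit-learn and emits one Prolog clause per root-to-leaf path. The predict/5 predicate, the variable names, and the use of the iris dataset are illustrative choices of ours.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, _tree

def tree_to_clauses(clf, feature_names, class_names, head="predict"):
    """Emit one Prolog clause per root-to-leaf path of a fitted tree."""
    # Prolog variables must start with an uppercase letter; stripping " (cm)"
    # is specific to the iris feature names used in the example below.
    vars_ = [f.replace(" (cm)", "").replace(" ", "_").capitalize()
             for f in feature_names]
    args = ", ".join(vars_)
    t = clf.tree_
    clauses = []

    def walk(node, conds):
        if t.feature[node] == _tree.TREE_UNDEFINED:  # leaf: close one clause
            label = class_names[t.value[node][0].argmax()]
            body = ", ".join(conds) if conds else "true"
            clauses.append(f"{head}({args}, {label}) :- {body}.")
        else:  # internal node: branch on the feature threshold
            var, thr = vars_[t.feature[node]], t.threshold[node]
            walk(t.children_left[node], conds + [f"{var} =< {thr:.2f}"])
            walk(t.children_right[node], conds + [f"{var} > {thr:.2f}"])

    walk(0, [])
    return clauses

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
for clause in tree_to_clauses(clf, iris.feature_names, iris.target_names):
    print(clause)
# e.g.: predict(Sepal_length, Sepal_width, Petal_length, Petal_width, setosa) :-
#       Petal_width =< 0.80.

Querying the resulting logic program with concrete feature values then returns both the predicted class and the chain of threshold tests that led to it, which is the kind of logical knowledge the paper exploits to build a narrative explanation.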

(keywords) XAI, logic programming, machine learning, symbolic vs. subsymbolic

WOA 2019 – 20th Workshop “From Objects to Agents”, chapter 16, CEUR Workshop Proceedings (AI*IA Series) 2404, pp. 105-112, July 2019.
Federico Bergenti, Stefania Monica (eds.), Sun SITE Central Europe, RWTH Aachen University.

@incollection{xailp-woa2019,
Author = {Calegari, Roberta and Ciatto, Giovanni and Dellaluce, Jason and Omicini, Andrea},
Booktitle = {WOA 2019 -- 20th Workshop ``From Objects to Agents''},
Editor = {Bergenti, Federico and Monica, Stefania},
Keywords = {XAI, logic programming, machine learning, symbolic vs. subsymbolic},
IrisId = {11585/692870},
Location = {Parma, Italy},
Month = {26--28~} # jun,
Pages = {105--112},
Publisher = {Sun SITE Central Europe, RWTH Aachen University},
ScopusId = {2-s2.0-85069688451},
Series = {CEUR Workshop Proceedings},
Subseries = {AI*IA Series},
Url = {http://ceur-ws.org/Vol-2404/paper16.pdf},
Title = {Interpretable Narrative Explanation for {ML} Predictors with {LP}: A Case Study for {XAI}},
Volume = 2404,
Year = 2019}

Publication

— authors

Roberta Calegari, Giovanni Ciatto, Jason Dellaluce, Andrea Omicini

— edited by

Federico Bergenti, Stefania Monica

— status

published

— type

Publication venue

— volume

WOA 2019 – 20th Workshop “From Objects to Agents”

— series

CEUR Workshop Proceedings 2404

— publication date

July 2019

— chapter no.

16

— pages

105-112

URL & ID

— DBLP

conf/woa/CalegariCDO19

— IRIS

11585/692870

— Scopus

2-s2.0-85069688451

BibTeX

— BibTeX ID

xailp-woa2019

— BibTeX category

incollection
