Evaluation Metrics for Symbolic Knowledge Extracted from Machine Learning Black Boxes: A Discussion Paper


Federico Sabbatini, Roberta Calegari

XAI-FIN-2022 - Second Workshop on Explainable AI in Finance @ICAIF 2022
2 November 2022

As opaque decision systems are increasingly adopted in almost every application field, their lack of transparency and human readability is a concrete concern for end-users. Among the existing proposals to couple human-interpretable knowledge with the accurate predictions of opaque models are rule-extraction techniques, capable of distilling symbolic knowledge out of an opaque model. However, how to quantitatively assess the readability of the extracted knowledge is still an open issue. A suitable metric would be key, for instance, to enabling automatic comparison between different knowledge representations, paving the way for parameter-autotuning algorithms for knowledge extractors. In this paper we discuss the need for such a metric as well as the critical issues of readability assessment and evaluation, taking into account the most common knowledge representations and highlighting the most puzzling open problems.

(keywords) Explainable artificial intelligence; Symbolic knowledge extraction; Readability metrics; AutoML
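
The paper deliberately leaves the definition of a readability metric open. As a minimal sketch of what such a quantitative score could look like, the toy Python function below penalises the sheer size of an extracted rule list (number of rules plus antecedent literals), so that two candidate knowledge representations of the same model can be compared automatically. The `Rule` structure, the scoring formula, and all names here are assumptions made for illustration only; they are not the metric discussed in the paper.

```python
# Illustrative sketch only: a naive size-based readability score for an
# extracted rule list. The Rule representation and the scoring formula
# are assumptions for this example, not the paper's proposal.
from dataclasses import dataclass


@dataclass
class Rule:
    antecedents: list[str]  # e.g. ["petal_length <= 2.5"]
    consequent: str         # e.g. "class = setosa"


def naive_readability(rules: list[Rule]) -> float:
    """Higher is more readable: penalise the number of rules and the
    total number of antecedent literals across the whole rule list."""
    n_rules = len(rules)
    n_literals = sum(len(r.antecedents) for r in rules)
    # The inverse-size functional form is arbitrary, chosen only to
    # yield a single scalar that an autotuner could maximise.
    return 1.0 / (1.0 + n_rules + n_literals)


# Two extracted representations of the same hypothetical black box:
compact = [Rule(["x <= 3"], "y = a"),
           Rule(["x > 3"], "y = b")]
verbose = [Rule(["x <= 1"], "y = a"),
           Rule(["x > 1", "x <= 3"], "y = a"),
           Rule(["x > 3", "x <= 7"], "y = b"),
           Rule(["x > 7"], "y = b")]

assert naive_readability(compact) > naive_readability(verbose)
```

A scalar score of this kind is what would let a parameter-autotuning loop rank alternative extractions without human inspection; the open issue raised by the paper is that raw size alone rarely captures readability, especially when comparing across different knowledge representations (e.g., rule lists versus decision trees).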

Publication

— authors

Federico Sabbatini, Roberta Calegari

— status

published

— sort

paper in proceedings

— publication date

2 November 2022

— volume

XAI-FIN-2022 - Second Workshop on Explainable AI in Finance @ICAIF 2022

— venue

— address

New York, NY, USA

URLs

original page | open access PDF

identifiers

— DOI

10.48550/arXiv.2211.00238

— Scholar

5021962022669458541

— Semantic Scholar

253244189
