Symbolic Knowledge Extraction from Opaque ML Predictors in PSyKE: Platform Design & Experiments


Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini

Intelligenza Artificiale, 2022

A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims to reveal the inner workings of the black box, thus enabling its inspection, representation, and explanation.
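The surrogate-model workflow can be made concrete with a minimal sketch (plain scikit-learn, not PSyKE itself): assuming the Iris dataset, an opaque neural network is trained first, and an interpretable decision tree is then fitted on the network's own predictions, so that the printed rules approximate the black box rather than the raw labels.

```python
# Minimal sketch of post-hoc surrogate extraction (illustrative only, not PSyKE):
# a decision tree is fitted on the black box's predictions and read as rules.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# 1. Train the opaque predictor (the "black box").
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
black_box.fit(X, y)

# 2. Fit an interpretable surrogate on the black box's outputs, not on y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Inspect the surrogate as a human-readable rule set.
print(export_text(surrogate, feature_names=data.feature_names))
```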
Various knowledge-extraction algorithms have been presented in the literature so far. Unfortunately, most of them currently lack a working implementation, or are only available as proofs of concept. In any case, a unified, coherent software framework supporting them all – as well as their interchange, comparison, and exploitation in arbitrary ML workflows – is currently missing.
Accordingly, in this paper we discuss the design of PSyKE, a platform providing general-purpose support to symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. Notably, PSyKE targets symbolic knowledge in logic form, allowing the extraction of first-order logic clauses. The extracted knowledge is thus both machine- and human-interpretable, and can be used as a starting point for further symbolic processing, e.g. automated reasoning.
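As an illustration of what extracted knowledge in logic form may look like, the following hypothetical sketch walks a surrogate decision tree and emits one Prolog-like clause per leaf; the predicate name, the variable names, and the scikit-learn surrogate are assumptions made for the example, not PSyKE's actual API or output format.

```python
# Hypothetical sketch: render a surrogate decision tree as Prolog-like clauses.
# Predicate/variable names are illustrative; PSyKE's own output may differ.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

VARS = ["SL", "SW", "PL", "PW"]  # one logic variable per Iris feature

def clauses(node=0, body=()):
    """Depth-first walk of the fitted tree, yielding one clause per leaf."""
    t = surrogate.tree_
    if t.children_left[node] == -1:  # leaf: emit a clause for the majority class
        label = data.target_names[t.value[node].argmax()]
        head = f"iris({', '.join(VARS)}, {label})"
        yield f"{head} :- {', '.join(body)}." if body else f"{head}."
    else:  # internal node: recurse on both branches, accumulating constraints
        var, thr = VARS[t.feature[node]], t.threshold[node]
        yield from clauses(t.children_left[node], body + (f"{var} =< {thr:.2f}",))
        yield from clauses(t.children_right[node], body + (f"{var} > {thr:.2f}",))

print("\n".join(clauses()))
```

Clauses of this shape can then be loaded into a logic engine (e.g. a Prolog interpreter) and combined with other knowledge for automated reasoning.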

Keywords: explainable AI, knowledge extraction, interpretable prediction, PSyKE

Publication

— authors: Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
— status: proof
— sort: article in journal
— journal: Intelligenza Artificiale
— publication date: 2022
