Roberta Calegari, Giovanni Ciatto, Enrico Denti, Andrea Omicini, Giovanni Sartor (eds.)
WOA 2021 – 22nd Workshop “From Objects to Agents”, pages 29–48
CEUR Workshop Proceedings (AI*IA Series) 2963
Sun SITE Central Europe, RWTH Aachen University
October 2021
A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner workings of the black box, thus enabling its inspection, representation, and explanation.
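To make the surrogate idea concrete, here is a minimal, self-contained sketch using scikit-learn only; it is not PSyKE's API, and the names (black_box, surrogate) and dataset are purely illustrative. An opaque neural network is trained on the data, then a shallow decision tree is fitted on the network's predictions, so that the tree's rules approximate the black box's decision logic:

    # A sketch of surrogate-based (pedagogical) extraction; scikit-learn only,
    # names and dataset are illustrative, not PSyKE's actual API.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    # The black box: a neural network whose internals are not inspected directly.
    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    black_box.fit(X_train, y_train)

    # The surrogate: a shallow tree trained on the black box's *predictions*,
    # so that its rules mimic (and thereby explain) the opaque model.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    # Human-readable rules approximating the black box's decision logic.
    print(export_text(surrogate, feature_names=list(data.feature_names)))

The surrogate's fidelity to the black box (e.g., their agreement on held-out data such as X_test) is the usual measure of how faithfully the extracted rules explain the predictor.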
Various knowledge-extraction algorithms have been presented in the literature so far. Unfortunately, implementations of most of them are currently either mere proofs of concept or unavailable altogether. Moreover, a unified, coherent software framework supporting them all – as well as their interchange, comparison, and exploitation in arbitrary ML workflows – is currently missing.
Accordingly, in this paper we present the design of PSyKE, a platform providing general-purpose support to symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. Notably, PSyKE targets symbolic knowledge in logic form, making it possible to produce first-order logic clauses as output. The extracted knowledge is thus both machine- and human-interpretable, and it can be used as a starting point for further symbolic processing – e.g., automated reasoning.
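As a purely illustrative complement (again, not PSyKE's actual API or output format), the following hypothetical helper renders a fitted scikit-learn decision tree as Prolog-like clauses, one clause per root-to-leaf path, to show the kind of logic-form output described above:

    # Hypothetical rendering of a fitted decision tree as Prolog-like clauses
    # (illustrative only; PSyKE's actual output format may differ).
    import numpy as np

    def tree_to_clauses(tree, class_names, head="iris"):
        t = tree.tree_
        args = ", ".join(f"X{i}" for i in range(t.n_features))
        clauses = []

        def visit(node, body):
            if t.children_left[node] == -1:  # leaf: emit one clause per path
                label = class_names[int(np.argmax(t.value[node]))]
                cond = ", ".join(body) if body else "true"
                clauses.append(f"{head}({args}, {label}) :- {cond}.")
                return
            var, thr = f"X{t.feature[node]}", t.threshold[node]
            visit(t.children_left[node], body + [f"{var} =< {thr:.2f}"])
            visit(t.children_right[node], body + [f"{var} > {thr:.2f}"])

        visit(0, [])
        return clauses

    # e.g. tree_to_clauses(surrogate, data.target_names) may yield clauses like
    #   iris(X0, X1, X2, X3, setosa) :- X2 =< 2.45.

Clauses in this form are both machine-processable (e.g., loadable into a Prolog engine for automated reasoning) and readable by humans.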
Keywords: explainable AI, knowledge extraction, interpretable prediction, PSyKE
Funding projects:
EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge (01/04/2021–31/03/2024)
StairwAI — Stairway to AI: Ease the Engagement of Low-Tech users to the AI-on-Demand platform through AI (01/01/2021–31/12/2023)