Symbolic knowledge extraction from opaque ML predictors in PSyKE: Platform design & experiments

Intelligenza Artificiale 16(1), pages 27–48
July 2022

A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner workings of the black box, thus enabling its inspection, representation, and explanation. Various knowledge-extraction algorithms have been presented in the literature so far; unfortunately, running implementations of most of them are currently either proofs of concept or unavailable. Moreover, a unified, coherent software framework supporting them all – as well as their interchange, comparison, and exploitation in arbitrary ML workflows – is currently missing. Accordingly, in this paper we discuss the design of PSyKE, a platform providing general-purpose support to symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. Notably, PSyKE targets symbolic knowledge in logic form, allowing the extraction of first-order logic clauses. The extracted knowledge is thus both machine- and human-interpretable, and can be used as a starting point for further symbolic processing, e.g. automated reasoning.
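To make the workflow sketched in the abstract concrete, here is a minimal usage sketch: an opaque scikit-learn classifier is trained first, then wrapped by one of PSyKE's extractors, which distils a logic theory from it. The sketch assumes the extractor factory API described in the paper (e.g. Extractor.cart) and a PSyKE release where extract accepts a pandas DataFrame whose last column is the target; exact names and signatures may differ across versions.

```python
# Minimal sketch of the PSyKE workflow described above.
# Assumes the Extractor factory API from the paper (e.g. Extractor.cart);
# exact signatures may vary across PSyKE releases.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

from psyke import Extractor

# 1. Train an opaque ML predictor (the "black box").
data = load_iris(as_frame=True).frame  # features + 'target' as last column
train, test = train_test_split(data, test_size=0.3, random_state=0)
predictor = KNeighborsClassifier().fit(train.iloc[:, :-1], train.iloc[:, -1])

# 2. Wrap the predictor with an extractor and distil symbolic knowledge.
extractor = Extractor.cart(predictor)  # CART-based pedagogical extractor (assumed factory)
theory = extractor.extract(train)      # first-order logic clauses (a Prolog-like theory)

# 3. The theory is both machine- and human-interpretable: it can be printed,
#    inspected, or handed to a logic engine for further symbolic processing.
print(theory)
```

Swapping the extraction algorithm should then amount to replacing the factory call (e.g. Extractor.cart with another of the paper's supported extractors), which is precisely the interchangeability the platform is designed to provide.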

Keywords: Explainable AI, knowledge extraction, interpretable prediction, PSyKE
Journal: Intelligenza Artificiale (IA)
Funding projects:
StairwAI — Stairway to AI: Ease the Engagement of Low-Tech users to the AI-on-Demand platform through AI (01/01/2021–31/12/2023)
EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge (01/04/2021–31/03/2024)