Symbolic knowledge extraction from opaque ML predictors in PSyKE: Platform design & experiments
Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
Intelligenza Artificiale 16(1), pages 27–48
July 2022
A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner working of the black box, thus enabling its inspection, representation, and explanation. Various knowledge-extraction algorithms have been presented in the literature so far. Unfortunately, running implementations of most of them are currently either proofs of concept or unavailable. In any case, a unified, coherent software framework supporting them all – as well as their interchange, comparison, and exploitation in arbitrary ML workflows – is currently missing. Accordingly, in this paper we discuss the design of PSyKE, a platform providing general-purpose support to symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. Notably, PSyKE targets symbolic knowledge in logic form, allowing the extraction of first-order logic clauses. The extracted knowledge is thus both machine- and human-interpretable, and can be used as a starting point for further symbolic processing, e.g. automated reasoning.
Keywords: Explainable AI, knowledge extraction, interpretable prediction, PSyKE
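The core idea the abstract describes – probing an opaque predictor and inducing human-readable logic rules that mimic its behaviour – can be illustrated with a minimal, self-contained sketch. This is not PSyKE's actual API; `opaque_predictor` and `extract_threshold_rule` are hypothetical names invented for illustration, and the output format merely imitates the Prolog-style clauses the paper targets.

```python
# Illustrative sketch only (NOT PSyKE's API): extracting a one-rule
# surrogate from an opaque predictor by probing it on sample inputs.

def opaque_predictor(x):
    # Stand-in for a black-box ML model: its decision logic is hidden
    # from the extractor, which may only query input/output pairs.
    return "positive" if x * 0.7 + 1.3 > 3.0 else "negative"

def extract_threshold_rule(predict, samples):
    """Probe the black box on sorted samples and induce a clause
    'positive(X) :- X > T.' by locating the decision boundary."""
    labelled = sorted((x, predict(x)) for x in samples)
    for (x0, y0), (x1, y1) in zip(labelled, labelled[1:]):
        if y0 != y1:
            threshold = (x0 + x1) / 2  # midpoint between the two classes
            return f"positive(X) :- X > {threshold}."
    return None  # no boundary found within the sampled range

rule = extract_threshold_rule(opaque_predictor, [i * 0.5 for i in range(11)])
print(rule)  # prints "positive(X) :- X > 2.25."
```

The extracted clause is both machine-interpretable (it can be loaded by a Prolog engine for further automated reasoning) and human-readable, which is the dual property the abstract emphasises.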
Events
- 22nd Workshop “From Objects to Agents” (WOA 2021) — 01/09/2021–03/09/2021
Publication
— authors
Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
— editors
Roberta Calegari, Giovanni Ciatto, Andrea Omicini, Giuseppe Vizzari
— status
published
— sort
article in journal
— publication date
July 2022
— journal
Intelligenza Artificiale
— volume
16
— issue
1
— pages
27–48
— number of pages
22
identifiers
— print ISSN
1724-8035
— online ISSN
2211-0097