Symbolic knowledge extraction from opaque ML predictors in PSyKE: Platform design & experiments
Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
Intelligenza Artificiale 16(1), pp. 27–48
July 2022
A common practice in modern explainable AI is to post-hoc explain black-box machine learning (ML) predictors – such as neural networks – by extracting symbolic knowledge out of them, in the form of either rule lists or decision trees. By acting as a surrogate model, the extracted knowledge aims at revealing the inner workings of the black box, thus enabling its inspection, representation, and explanation. Various knowledge-extraction algorithms have been presented in the literature so far. Unfortunately, running implementations of most of them are currently either proofs of concept or unavailable. In any case, a unified, coherent software framework supporting them all – as well as their interchange, comparison, and exploitation in arbitrary ML workflows – is currently missing. Accordingly, in this paper we discuss the design of PSyKE, a platform providing general-purpose support to symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. Notably, PSyKE targets symbolic knowledge in logic form, allowing the extraction of first-order logic clauses. The extracted knowledge is thus both machine- and human-interpretable, and can be used as a starting point for further symbolic processing – e.g. automated reasoning.
Keywords: Explainable AI, knowledge extraction, interpretable prediction, PSyKE
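As a rough illustration of the surrogate-model idea the abstract describes – fit an interpretable model on a black box's predictions and read symbolic rules off it – the sketch below uses plain scikit-learn. It is a minimal, generic example, not PSyKE's own API: the dataset, model sizes, and feature names are arbitrary choices made only for the illustration.

```python
# Minimal sketch of post-hoc rule extraction via a surrogate model.
# NOT PSyKE's API: a generic scikit-learn illustration of the workflow
# the abstract describes (black box -> surrogate -> readable rules).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The "black box": a neural network whose internals are hard to inspect.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X_train, y_train)

# 2. The surrogate: a shallow decision tree trained to mimic the black box,
#    i.e. fitted on the black box's outputs rather than on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Symbolic knowledge: the tree is rendered as human-readable rules
#    that approximate, and thereby explain, the black box's behaviour.
print(export_text(surrogate,
                  feature_names=["sepal_len", "sepal_wid",
                                 "petal_len", "petal_wid"]))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"fidelity to the black box: {fidelity:.2f}")
```

PSyKE generalises this kind of output beyond decision trees, targeting first-order logic clauses that can feed further symbolic processing such as automated reasoning, as discussed in the paper.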
Events
- 22nd Workshop “From Objects to Agents” (WOA 2021) — 01/09/2021–03/09/2021
Publication
— authors
Federico Sabbatini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini
— edited by
Roberta Calegari, Giovanni Ciatto, Andrea Omicini, Giuseppe Vizzari
— status
published
— type
journal article
— publication date
July 2022
— journal
Intelligenza Artificiale
— volume
16
— issue
1
— pages
27–48
— page count
22
URL
identifiers
— DOI
— DBLP
— IRIS
— Scholar
— Scopus
— WoS / ISI
— print ISSN
1724-8035
— online ISSN
2211-0097