Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling (eds.)
Explainable and Transparent AI and Multi-Agent Systems, capitolo 6, pp. 90–108
Lecture Notes in Computer Science 13283
Springer
2022
A long-standing ambition in artificial intelligence is to combine predictors' inductive capabilities (i.e., learning from examples) with deductive ones (i.e., drawing inferences from prior symbolic knowledge). Many algorithms in the literature support the injection of prior symbolic knowledge into predictors, generally with the aim of attaining better (i.e., more effective or efficient w.r.t. predictive performance) predictors. However, to the best of our knowledge, running implementations of these algorithms are currently either mere proofs of concept or altogether unavailable. Moreover, a unified, coherent software framework supporting them, as well as their interchange, comparison, and exploitation in arbitrary ML workflows, is currently missing. Accordingly, in this paper we present PSyKI, a platform providing general-purpose support for symbolic knowledge injection into predictors via different algorithms.
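To make the idea of symbolic knowledge injection concrete, the sketch below shows one common strategy: encoding a logic rule as a penalty term added to a predictor's loss, so that training is steered by both data and prior knowledge. This is a generic, hypothetical illustration of the technique, not the PSyKI API; the rule, function names, and weighting are assumptions for the example.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-9):
    """Standard binary cross-entropy (the purely data-driven part)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def rule_penalty(x, y_pred):
    """Degree to which predictions violate a hypothetical symbolic rule:
    'if the first feature is positive, the class should be 1'."""
    applies = x[:, 0] > 0            # samples where the rule fires
    if not applies.any():
        return 0.0
    # violation degree: distance of the prediction from 1 on those samples
    return float(np.mean(1.0 - y_pred[applies]))

def injected_loss(x, y_true, y_pred, weight=0.5):
    """Data-driven loss plus a symbolic-knowledge regulariser:
    minimising this trains the predictor to fit the data AND the rule."""
    return cross_entropy(y_true, y_pred) + weight * rule_penalty(x, y_pred)

# Predictions that respect the rule incur a lower injected loss than
# equally data-agnostic predictions that violate it.
x = np.array([[1.0], [2.0], [-1.0]])
y_true = np.array([1.0, 1.0, 0.0])
compliant = np.array([0.9, 0.9, 0.1])   # rule satisfied where it fires
violating = np.array([0.1, 0.1, 0.1])   # rule ignored
```

Other injection strategies surveyed in the literature (e.g., structuring the network architecture after the rules, or embedding knowledge into the input space) share the same goal but act at different points of the ML workflow; a loss regulariser is merely the simplest to sketch.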
keywords
Symbolic Knowledge Injection, Explainable AI, XAI, Neural Networks, PSyKI
reference talk
origin event
journal or series
Lecture Notes in Computer Science
(LNCS)
funding project
EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge
(01/04/2021–31/03/2024)
acts as
reference publication for talk