Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling (eds.)
Explainable and Transparent AI and Multi-Agent Systems, chapter 6, pages 90–108
Lecture Notes in Computer Science 13283
Springer
2022
A long-standing ambition in artificial intelligence is to integrate predictors' inductive capabilities (i.e., learning from examples) with deductive capabilities (i.e., drawing inferences from prior symbolic knowledge). Many methods in the literature support the injection of prior symbolic knowledge into predictors, generally with the aim of attaining better predictors, i.e., more effective or efficient w.r.t. predictive performance. However, to the best of our knowledge, running implementations of these algorithms are, in most cases, either proofs of concept or simply unavailable. Moreover, a unified, coherent software framework supporting them, as well as their interchange, comparison, and exploitation in arbitrary ML workflows, is currently missing. Accordingly, in this paper we present PSyKI, a platform providing general-purpose support for symbolic knowledge injection into predictors via different algorithms.
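Concretely, injection algorithms of this sort typically amend either a predictor's structure or its training objective so that it complies with the given symbolic knowledge. The following is a minimal sketch of the loss-based flavour of injection, written in plain PyTorch rather than against PSyKI's actual API: a hypothetical rule "x0 > 0 implies class 1" is translated into a differentiable penalty added to the ordinary training loss. All names and the toy rule are illustrative assumptions, not part of PSyKI.

    import torch
    import torch.nn as nn

    # Toy sub-symbolic predictor: a small MLP for binary classification.
    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

    def rule_penalty(x, logits):
        # Fuzzy encoding of the hypothetical rule "x0 > 0 implies class 1":
        # whenever the premise holds, a low predicted probability for
        # class 1 contributes to the loss.
        probs = torch.softmax(logits, dim=1)
        premise = torch.sigmoid(10.0 * x[:, 0])   # fuzzy truth degree of x0 > 0
        return (premise * (1.0 - probs[:, 1])).mean()

    # Standard training loop, with the knowledge-based penalty added
    # to the usual data-driven loss.
    x = torch.randn(64, 2)
    y = (x[:, 0] > 0).long()                      # labels consistent with the rule
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(100):
        optimiser.zero_grad()
        logits = model(x)
        loss = nn.functional.cross_entropy(logits, y) + 0.5 * rule_penalty(x, logits)
        loss.backward()
        optimiser.step()

Structure-based injection algorithms would instead rewire the network so that the rule is encoded into dedicated layers; a platform such as PSyKI aims to expose such alternative algorithms behind a single interface, so that they can be interchanged and compared within the same ML workflow.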
keywords
Symbolic Knowledge Injection, Explainable AI, XAI, Neural Networks, PSyKI
journal or series
Lecture Notes in Computer Science (LNCS)
funding project
EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge (01/04/2021–31/03/2024)