A long-standing ambition in artificial intelligence is to integrate predictors' inductive capabilities (i.e., learning from examples) with deductive capabilities (i.e., drawing inferences from prior symbolic knowledge). Many methods in the literature support the injection of prior symbolic knowledge into predictors, generally with the purpose of attaining better predictors, that is, predictors that are more effective or efficient in terms of predictive performance. However, to the best of our knowledge, running implementations of these algorithms are, in most cases, either proofs of concept or unavailable. Moreover, a unified, coherent software framework supporting them, as well as their interchange, comparison, and exploitation in arbitrary ML workflows, is currently missing. Accordingly, in this paper we present PSyKI, a platform providing general-purpose support for symbolic knowledge injection into predictors via different algorithms.
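To make the idea of symbolic knowledge injection concrete, the sketch below shows one common strategy: penalising a neural predictor during training whenever its output violates a symbolic rule. This is a generic illustration, not PSyKI's actual API; the rule ("if the first feature exceeds 0.5, the instance belongs to class 1"), the penalty weight, and the toy data are all illustrative assumptions.

```python
# A minimal sketch of loss-based symbolic knowledge injection.
# NOT PSyKI's API; rule, weight, and data are illustrative assumptions only.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def rule_penalty(x, y_pred):
    # Hypothetical rule: "if feature 0 > 0.5, the instance belongs to class 1".
    applies = tf.cast(x[:, 0] > 0.5, tf.float32)         # 1 where the rule's premise holds
    satisfied = y_pred[:, 1]                              # predicted probability of class 1
    return tf.reduce_mean(applies * (1.0 - satisfied))    # average degree of rule violation

# Plain Keras classifier (the predictor to be "educated" with the rule).
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),
])
optimizer = keras.optimizers.Adam()
cross_entropy = keras.losses.SparseCategoricalCrossentropy()

# Toy data consistent with the rule above.
x_train = np.random.rand(256, 4).astype("float32")
y_train = (x_train[:, 0] > 0.5).astype("int64")

# Training loop whose loss combines the data-driven term with the rule penalty.
for epoch in range(10):
    with tf.GradientTape() as tape:
        y_pred = model(x_train, training=True)
        loss = cross_entropy(y_train, y_pred) + 0.5 * rule_penalty(x_train, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Constraining the loss is only one family of injection techniques; others act on the predictor's structure or weights instead, which is why a platform exposing different injection algorithms behind a common interface is useful.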
Funding project: EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge (01/04/2021–31/03/2024)