Symbolic Transfer Learning through Knowledge Manipulation Methods


In the last decades, neuro-symbolic approaches have been extensively explored to bridge connectionist models and symbolic methods. The goal is to take advantage of both techniques: obtaining high predictive performance while remaining human interpretable. Contributions have come from different areas:
1. algorithms for extracting symbolic knowledge from machine learning (ML) models – treated as black boxes – to obtain an explanation of their behaviour, coming from explainable artificial intelligence (XAI) and logics;
2. methods to inject symbolic knowledge into deep neural networks (DNN) to improve their performance, especially under special conditions (e.g., scarcity of data, noise), coming predominantly from data science;
3. hybrid approaches – i.e., using DNN and logic in the same system but as separate entities – coming from both areas.
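As a purely illustrative example of the first research line (symbolic knowledge extraction), the following Python sketch uses a common pedagogical strategy: an interpretable decision-tree surrogate is fitted on the predictions of a black-box model, and its branches are read out as if-then rules. The dataset, the models, and the use of scikit-learn are assumptions made for illustration only; they are not taken from the publication.

# Illustrative sketch of symbolic knowledge extraction via a surrogate model.
# Assumptions: scikit-learn is available; dataset and models are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Black-box model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Pedagogical extraction: fit an interpretable surrogate on the black box's
# own predictions, then read each root-to-leaf path as a symbolic rule.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate,
                  feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))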
In this work, we outline a research plan to assess what we call symbolic transfer learning (STL). As in well-known transfer learning (TL), the main goal is to transfer information about a domain from a source to a target (usually both are ML models). The key difference from TL is that STL does not transfer subsymbolic information – such as the layers of a DNN – but symbolic information (e.g., logic predicates). The advantages of relying on symbolic formalisms are manifold:
1. the information is both human and machine interpretable;
2. the information is concise thanks to its intensional representation;
3. the method is target agnostic, i.e., it makes no assumptions about the underlying ML model that will receive the knowledge.
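To make the idea more concrete, here is a minimal, hypothetical end-to-end sketch of an STL-like workflow under simple assumptions: threshold predicates are extracted from an interpretable source model and conveyed to a target model of a different family by encoding each predicate as an extra Boolean input feature – one simple way of keeping the transfer target agnostic. Every name, model, and the injection strategy itself are illustrative choices, not the method proposed in the paper.

# Hypothetical sketch of symbolic transfer: rules extracted from a source model
# are conveyed to a *different* kind of target model. The injection strategy
# (rules as extra Boolean features) is only one simple, target-agnostic option.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_src, X_tgt, y_src, y_tgt = train_test_split(X, y, test_size=0.5, random_state=0)

# 1. Source side: an interpretable model from which symbolic knowledge
#    (here, simple "feature <= threshold" predicates) is extracted.
source = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_src, y_src)
tree = source.tree_
predicates = [
    (tree.feature[node], tree.threshold[node])
    for node in range(tree.node_count)
    if tree.feature[node] >= 0  # keep internal (non-leaf) nodes only
]

# 2. Target side: the predicates are injected as extra Boolean features,
#    so the target can be *any* ML model (target-agnostic transfer).
def inject(X, predicates):
    extra = np.column_stack([(X[:, f] <= t).astype(float) for f, t in predicates])
    return np.hstack([X, extra])

target = LogisticRegression(max_iter=5000)
target.fit(inject(X_tgt, predicates), y_tgt)
# Smoke test on the training data only; a proper evaluation would use a held-out set.
print("target accuracy:", target.score(inject(X_tgt, predicates), y_tgt))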

reference publication
Symbolic Transfer Learning through Knowledge Manipulation Methods (2023) — Matteo Magnini
