Modern ML models achieve high accuracy on many tasks, yet they are often extremely complex and lack interpretability. This can be a paramount issue in critical domains such as healthcare, security, and finance. A post-hoc explanation of the model is one way to tackle this problem. Such an explanation can be produced via Symbolic Knowledge Extraction (SKE), i.e., by generating a human-interpretable symbolic representation of the predictor's behavior. Dually, the knowledge of a human domain expert (or of other software) can be provided to an ML model to harness its behavior, so that the predictor complies with that knowledge and, as a result, is more accurate, requires less data or training time, etc. This process is called Symbolic Knowledge Injection (SKI). In this talk, common concepts of ML and logic are introduced, along with the definitions of SKE and SKI. Then, different possible ways of performing SKE and SKI are presented and discussed (taxonomies). Finally, two technologies, PSyKE and PSyKI, are shown with working examples of implemented SKE and SKI algorithms.
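As a concrete illustration of the SKE idea, a minimal sketch of one common, model-agnostic (pedagogical) approach: train an opaque predictor, then fit an interpretable surrogate on the predictor's own outputs and read off its rules. This uses scikit-learn for illustration only; it is not the PSyKE API, and the dataset, model choices, and depth limit are assumptions made for the example.

```python
# Pedagogical SKE sketch (illustrative, not the PSyKE API):
# extract human-readable rules from an opaque model via a surrogate tree.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# 1. The "black box": accurate but opaque.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Query the black box and fit a symbolic surrogate on its answers,
#    keeping the tree shallow so the extracted rules stay interpretable.
y_bb = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# 3. Fidelity: how faithfully the extracted rules mimic the predictor
#    (measured against the black box's outputs, not the true labels).
fidelity = accuracy_score(y_bb, surrogate.predict(X))
print(f"fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # the rules as a readable if-then structure
```

The key design point is that the surrogate is trained on the predictor's outputs rather than the ground truth, so the fidelity score quantifies how well the symbolic representation explains the model itself.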