    Matteo Magnini, 05/04/2022 08:49

     REVIEW 1 -
    SUBMISSION: 13
    TITLE: On the Design of PSyKI: a Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors
    AUTHORS: Matteo Magnini, Giovanni Ciatto and Andrea Omicini


     Overall evaluation -
    SCORE: 1 (weak accept)


     TEXT:
    On the Design of PSyKI: A Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors

The paper presents a platform providing general-purpose support for injecting symbolic knowledge into predictors via different algorithms.
    The paper addresses the question of what a black-box predictor learns from the data.
    To make predictors less of a black box and, to some extent, "transparent boxes", it proposes symbolic knowledge injection to control the training of the neural network, so that the designer can suggest what the predictor may or may not learn. This could be used alongside, or in place of, traditional "black-box opening" approaches such as the authors' previous work on symbolic knowledge extraction.

First of all, the paper is well written and easy to follow. The system architecture, workflows, formulas, examples, and analysis of results are clear and consistent with expectations, which makes the paper suitable for publication.

    However, to my understanding, the proposed work could be used to enable transparency of predictors, but the paper does not elaborate on how to provide meaningful information and clarity on what information is provided and why, or how to extract explanations and transfer them into a human/machine readable format.
    Specifically, the authors suggest that "once NN training is complete, the injection phase is also considered complete, so the Λ layer can be removed and the remaining network can be used as usual." Thus, the reader would assume that the main contribution is to control the learning process during the training phase by injecting symbolic knowledge, but not to make the learning process self-explaining or transparent.
    This might raise some concerns about the relevance of this contribution to the workshop topics.
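    The mechanism quoted above (train the network with an extra constraining Λ layer, then drop it at inference time) can be illustrated with a minimal, self-contained sketch. This is not the actual PSyKI API; the names (`forward`, `lambda_penalty`) and the toy rule are assumptions made purely for illustration: a symbolic constraint is encoded as a training-time penalty, and once training is complete inference uses the plain network alone.

```python
import numpy as np

# Toy regression data: ground truth is y = 2x on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0]

w = 0.0  # the single trainable weight of a deliberately tiny "network"

def forward(x, w):
    """The sub-symbolic predictor: a one-weight linear model."""
    return w * x[:, 0]

def lambda_penalty(pred):
    """Illustrative Λ 'layer': a symbolic rule ("predictions must be
    non-negative") encoded as a differentiable cost, active only
    during training."""
    return np.mean(np.maximum(-pred, 0.0) ** 2)

# Training: task loss + symbolic penalty, minimized by plain gradient
# descent with a numeric (central-difference) gradient.
lr = 0.5
for _ in range(200):
    def loss(wv):
        p = forward(X, wv)
        return np.mean((p - y) ** 2) + lambda_penalty(p)
    g = (loss(w + 1e-6) - loss(w - 1e-6)) / 2e-6
    w -= lr * g

# After training, the Λ penalty is simply dropped: inference is
# forward() alone, exactly as the quoted passage describes.
prediction = forward(np.array([[0.5]]), w)  # close to 1.0
```

    The point of the sketch is the asymmetry the reviewer highlights: the constraint shapes *what is learned*, but leaves no explanatory trace in the deployed predictor.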

    Minor comment:
        In Table 4: the way the rules are presented for the (Two of a Kind) and (Three of a Kind) classes is a bit confusing. I understand that three(R1, . . . , S5) ← R1 = R2 ∧ R2 = R3 explicitly states that R1, R2, and R3 should be the same, which is correct, and this is what differentiates it from two(R1, . . . , S5) ← R1 = R2 ∧ R3 = R4; but since both are abbreviated with an ellipsis, it is non-trivial for the reader to spot the difference.
     

 Some typos:
            abstract: (an unified) => (a unified): also appears in several places
            Introduction. page 2: "to experiment already": I guess that "with" is missing => "to experiment (with) already"
            Section 4.1: "continuos" => "continuous"


     REVIEW 2 -
    SUBMISSION: 13
    TITLE: On the Design of PSyKI: a Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors
    AUTHORS: Matteo Magnini, Giovanni Ciatto and Andrea Omicini


     Overall evaluation -
    SCORE: 0 (borderline paper)


     TEXT:
    The paper describes a software platform for applying Symbolic Knowledge Injection (SKI) algorithms to Neural Networks (NN).

The paper first presents the problem: many algorithms, classified into two classes, but little software. It then presents an approach general enough to execute these algorithms. It also presents a use case.

    I first want to put a disclaimer that I know little of NN and SKI.

    While the paper clearly deals with XAI, presents an important problem and also describes the solution (in the form of software), as well as a use case that proves the usability of SKIs, there are a couple of major flaws.

    Presentation

The paper contains many typos and incorrect English sentences, some of which even have reversed meanings (e.g. "However, it is unlikely that knowledge in this form cannot be directly injected" - I guess you mean "likely" or "can"?). Moreover, the figures are often too small to read, and some of them should not appear in the main body of the paper (for example, the full set of Datalog rules should be put in the appendix; only the rules relevant to an example should be included).

    Contribution

The paper surveys various SKI algorithms, then presents a framework. The framework clearly does not support all the algorithms surveyed (using Datalog vs. FOL, etc.). There is no discussion of which algorithms can be captured.

The case study clearly shows the usefulness of SKI. But this is not the goal of the paper. The paper presents a platform, and as such potential users are interested in two main things: whether they can use it with their algorithms (discussed above), and how efficient it is. There is no benchmarking of its efficiency.

    Clarity

The paper introduces an interesting example from the poker world on p. 9. It would be good if the example were used to demonstrate the whole process. For some reason, it is not used to explain the fuzzification (3.3). I found the paper in general difficult to follow, but I guess that an NN/SKI reader interested in using the software will find it much easier.

If the paper really wants to explain how the system works, it would have been best to describe the whole process on the small example. As mentioned above, the case study seems to me somewhat unnecessary, yet takes up quite some space. Some figures could also be moved to the appendix. On the other hand, other figures should be made clearer (and bigger), and the whole process, from beginning to end, should be clearly explained using the first example.


     REVIEW 3 -
    SUBMISSION: 13
    TITLE: On the Design of PSyKI: a Platform for Symbolic Knowledge Injection into Sub-Symbolic Predictors
    AUTHORS: Matteo Magnini, Giovanni Ciatto and Andrea Omicini


     Overall evaluation -
    SCORE: 1 (weak accept)


     TEXT:
The authors present PSyKI, an open-source platform for symbolic knowledge injection into sub-symbolic predictors.

    1. Although the idea of the paper is interesting, the paper is difficult to read and transitions between paragraphs and sentences are not smooth. The authors may rewrite many parts of the paper to improve its readability.
  2. The authors should state that PSyKI is a Python library.
      3. The SOTA could be shortened, as well as other sections in the paper.
  4. The authors should not make strong claims without references, such as "Virtually all of them assume…", "all SKI methods proposed in literature share…", or "… common limitation of all SKI methods". Consider using "most of the…" or "to the best of our knowledge…" instead.

    Questions:

1. What is the difference between "explainable AI" and "XAI" in the keywords?
  2. Why do you take 25,010 records for training and one million for the test? It is unclear. Is this train/test ratio specific to the Poker Hand dataset, or is it a general ratio when using SKI? I understand that one of the advantages of SKI is to reduce the amount of data needed for training.
  3. How are the hyper-parameters of the injector (the new predictor) found? The weights are obtained by learning, but what about the hyper-parameters?
  4. The case study highlights the advantages of SKI in general. What is the added value of PSyKI in these results? It is unclear.

    Minor:
5. Reference needed for: "SKI brings a number of key benefits to the training"
6. Specify the domain in "case study of injection in a well known domain"

    Typos
7. Capitalize the first letters of the keywords
8. "method that given the given predictor"
9. "conclusion are"
10. etc.
