• Unknown User, 23/05/2019 11:52

    Dear Roberta Calegari,

    we are pleased to inform you that your paper

    Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI

    has been accepted as a full paper at WOA 2019, and it will appear in the proceedings of the workshop.

    Below you can find the Reviewers’ comments. Please take them into account when revising your paper.

    The camera-ready version of the paper should be uploaded on EasyChair. Please upload a ZIP file containing:

    • the source file(s) of the paper (.tex or .doc)
    • the camera-ready version of the paper (.pdf)
    • all additional needed files (e.g., figures)

    The paper should be uploaded by June 5th.

    You will soon receive a message asking for the details of your participation. Please note that at least one of the authors is expected to attend the workshop.

    Please do not hesitate to contact us if you have any further questions.

    Best regards, Stefania and Federico

    SUBMISSION: 4
    TITLE: Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI


     REVIEW 1 

    SUBMISSION: 4
    TITLE: Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI
    AUTHORS: Roberta Calegari, Giovanni Ciatto, Jason Dellaluce and Andrea Omicini


     Overall evaluation 

    SCORE: 2 (accept)


     TEXT: The paper addresses an extremely relevant and timely problem in AI, namely the "explainability" of the results produced by opaque machine learning approaches. The paper proposes to translate Decision Trees (which could be extracted automatically from classifiers based on other ML techniques, although this part is not covered in the paper) into Prolog, to use the latter to reason about the decision process, and to explain it in narrative form.
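
    As an aside for readers unfamiliar with the idea, the following is a minimal, purely illustrative sketch (not taken from the paper) of how a small decision tree could be turned into Prolog clauses, one clause per root-to-leaf path; the tree, feature names, class labels, and predicate names are invented for the example.

        # Hypothetical illustration (not from the paper): turning a toy decision tree
        # into Prolog clauses so that each root-to-leaf path becomes one rule.
        tree = {
            "feature": "glucose", "threshold": 125.0,
            "left":  {"leaf": "healthy"},                    # glucose =< 125
            "right": {"feature": "bmi", "threshold": 30.0,   # glucose > 125
                      "left":  {"leaf": "at_risk"},
                      "right": {"leaf": "diabetic"}},
        }

        def to_prolog(node, conditions=()):
            """Emit one Prolog clause per root-to-leaf path of the tree."""
            if "leaf" in node:
                body = ", ".join(conditions) if conditions else "true"
                return [f"predict({node['leaf']}) :- {body}."]
            feat, thr = node["feature"], node["threshold"]
            var = feat.capitalize()   # one logic variable per feature, e.g. Glucose
            left  = to_prolog(node["left"],  conditions + (f"{feat}({var}), {var} =< {thr}",))
            right = to_prolog(node["right"], conditions + (f"{feat}({var}), {var} > {thr}",))
            return left + right

        print("\n".join(to_prolog(tree)))
        # predict(healthy) :- glucose(Glucose), Glucose =< 125.0.
        # predict(at_risk) :- glucose(Glucose), Glucose > 125.0, bmi(Bmi), Bmi =< 30.0.
        # predict(diabetic) :- glucose(Glucose), Glucose > 125.0, bmi(Bmi), Bmi > 30.0.

    Once clauses of this kind are loaded into a Prolog engine, the proof of a goal such as predict(Result) can be inspected and verbalised, which is the kind of narrative explanation the review refers to.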

    The paper introduces the background and the context, presents the proposed solution, which also includes a working prototype, and summarises the conclusions.

    The paper is consistent with the aims of WOA, although the role of intelligent agents could be emphasised more.

    From a technical point of view, I have some doubts about "The main idea behind such an approach is to build a DT approximating the behaviour of a given predictor": even though the extraction of a DT approximating the behaviour of a predictor based on other ML techniques is not the subject of this paper, a discussion of how "good" the approximation can be, and of how much its quality can affect the quality of the "explainability", seems a relevant aspect to address. The paper in fact assumes that a DT has been extracted, but if the DT does not faithfully represent the behaviour of the underlying black-box model, explaining how the DT works becomes of little use.

    Finally, the paper contains many errors and typos, some of which are reported below: the authors should carefully proofread the paper.

    In the first two pages:

    loan to a given costumer -> customer

    a set of symptoms have been -> has been; "set" is singular, so I believe it takes a singular verb

    virtually any person, as a consumer of services and goods, let -> lets

    In fact, in several contexts, it is not to let intelligent agents output some bare decision, but the corresponding explanation is... -> the sentence is not entirely clear to me

    These issues are particularly challenging in critical application scenarios such as healthcare, often involving the use of image (i.e., identifiable) data from children. -> I would remove "from children"; it seems to me the problem also exists for images of adults

    the first potential concern is to develop algorithmic bias -> "to develop" or "to avoid"?

    Then Section III introduces the our vision -> remove "the"

    each xi represent an instance -> represents

    for which the epected -> expected

    For instance, it is widely acknowledged how generalised linear models (GLM) are more interpretable than neural networks (NN) -> "how" should be "that"

    The state of the art for expandability -> expandability? Or explainability?

    why do p predicts y for the input x? how do p builds its predictions? -> why does p predict... how does p build...

    Indeed, some approaches adopts -> adopt

    with the premise of potentially minimising -> premise or promise?

    Generally speaking, we believe the intelligence decision process accounts for this two kind of rules -> intelligent decision process?

    interaction of different kind -> kinds

    to combine the specific feature -> features

    The first prototype we design and implement -> designed and implemented?

    With respect to Fig. 1, we experiment -> experimented?

    can build a unique decision output Result that combine the two different diseases -> that combines

    and making prediction -> predictions


     REVIEW 2 

    SUBMISSION: 4
    TITLE: Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI
    AUTHORS: Roberta Calegari, Giovanni Ciatto, Jason Dellaluce and Andrea Omicini


     Overall evaluation 

    SCORE: 1 (weak accept)


     TEXT: The paper documents a few early experiments performed to study an approach to support eXplainable Artificial Intelligence (XAI). After a lengthy discussion on the motivations of the work, the paper proposes to make sub-symbolic Machine Learning (ML) classifiers explainable by associating them with appropriate logic programs. While the grand objectives of the work are worth spending an entire research career in the attempt to reach them, the paper documents experiments that are still too preliminary to be considered significant. The proposed approach sounds good, but a number of subtle problems may limit its applicability. Just to cite a few examples: How does the incompleteness of first-order reasoning intertwine with the presented approach? How can we ensure that sufficiently abstract logic programs are generated (i.e., that the generated programs are not a mere coupling between input and output patterns)? I suggest that the authors add some text to discuss the effort of DARPA toward XAI, because I think that it cannot be considered background knowledge of the workshop attendees.
