    Andrea Omicini, 30/03/2020 12:09

    Dear Giovanni Ciatto,

    We are pleased to inform you that your paper (ID 5): "Agent-Based Explanations in AI: Towards an Abstract Framework" has been accepted for presentation at the 2nd International Workshop on EXplainable TRansparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS 2020) and for inclusion in the post-proceedings as a regular (full) paper. Congratulations! The papers went through a rigorous review process: each paper was reviewed by at least two program committee members. Enclosed at the bottom of this message, please find the review report for your paper. Please take into account the enclosed comments by the reviewers when preparing the camera-ready version. Moreover, please

    • ensure that you carefully observe both the format and the page limit (up to 18 pages for the final version).
    • add the PDF of the paper, the LaTeX source code (and images), and a document listing the changes made to a single zip file.
    • name the zip file "ID_XX_First_Author_et_al.zip" using the ID of your paper.
    • submit the zip file at https://www.dropbox.com/request/L9RWYX3YbVB4VSYl0UwL.
    • deadline: 1 May 2020.

    IMPORTANT: REGISTRATION * Given the critical worldwide situation due to COVID-19, many institutes have forbidden their employees to travel. Moreover, New Zealand has imposed 14 days of self-isolation on incoming travellers. For these reasons, the IFAAMAS board has decided to run the conference virtually, most likely over https://underline.io/ (more details will follow in the coming days). Please note that at least one author of your paper must register, upload the video presentation, and possibly attend the virtual conference to participate in the Q&A and panel discussions.

    Currently, the IFAAMAS board is discussing the registration fees. Please visit https://aamas2020.conference.auckland.ac.nz/registration/ for all the updates and details concerning the registration.

    Once again, thank you for your contribution(s) to EXTRAAMAS 2020. We are looking forward to hearing from you and discussing your work! 

    Sincerely, EXTRAAMAS 2020 Scientific Committee.

    https://extraamas.ehealth.hevs.ch/

    SUBMISSION: 5 TITLE: Agent-Based Explanations in AI: Towards an Abstract Framework


     REVIEW 1 

    SUBMISSION: 5 TITLE: Agent-Based Explanations in AI: Towards an Abstract Framework AUTHORS: Giovanni Ciatto, Michael I. Schumacher, Andrea Omicini and Davide Calvaresi


     Overall evaluation 

    SCORE: 2 (accept)


    TEXT: This paper proposes an abstract framework for studying the explainability of machine learning models. The difference between interpretation and explanation is emphasized. The motivation for this work is clearly described: clear explicability of algorithmic decisions is increasingly important nowadays (e.g. the GDPR regulation), while most machine learning approaches are black boxes with low explicability.

    The paper is well-written, and presents interesting ideas. I believe that it will be a valuable contribution to the workshop. I only have minor remarks and questions, listed below.

    First, concerning the "low computational (or cognitive) effort" associated with interpretability (Section 3, second paragraph), is it possible to give some examples of concrete instances of this low effort? Otherwise, the explanation for the word "easy" is as fuzzy as the word itself.

    Then, here are some typos:
    • Page 3: "or hen they are expected" -> "or when they are expected"
    • Page 5: "The reminder of that paper" -> "The remainder of that paper"
    • Page 5: "the amount of cognitive entities the human mind can at one is very limited", the sentence is unclear. I guess a word is missing after "can"? And also, do you mean "at once"?
    • Page 6: "The reminder of the paper" -> "The remainder of the paper"
    • Page 7, Figure 2: "I_A(X') > I_A(X')" -> "I_A(X') > I_A(X)"
    • Page 8: "We also say the M' is produced" -> "We also say that M' is produced"
    • Page 11: "the authors's definition of explanation does not exactly the one proposed in this paper" -> do you mean "is not exactly"? Or "does not exactly fit"?


     REVIEW 2 

    SUBMISSION: 5 TITLE: Agent-Based Explanations in AI: Towards an Abstract Framework AUTHORS: Giovanni Ciatto, Michael I. Schumacher, Andrea Omicini and Davide Calvaresi


     Overall evaluation 

    SCORE: 1 (weak accept)


    TEXT: The paper presents a framework for characterizing various works done in the area of explanation generation for machine learning problems. Specifically, they try to characterize explanation for a machine learning model M as a dialogue between two agents, wherein one of the agents (the explainer) tries to provide information about an analogous model M' which has a representation that is more interpretable to the end-user (the explainee) and provides similar performance on the input dataset of interest (or the entire dataset in the case of global explanations). They also discuss social aspects of explanatory dialogues, including the achievement of mutual understanding between the agent explaining and the one receiving the explanation.

    While I disagree with some of the conclusions/choices made by the framework and feel they overlooked a lot of important prior work, I think the paper is an interesting attempt and makes sense to be included in the workshop. Below are some detailed comments about the current paper.

    Explanations for AI systems vs ML systems: The paper focuses on presenting a framework that tries to characterize works done for machine learning systems, but the title of the paper and most of the paper claim it is a framework for explanations in AI. There is a lot of work outside ML on explanations for AI approaches like planning, multi-agent systems, CSPs, etc. Either update the paper to clarify that this framework is only meant to capture ML works, or include a discussion on why this framework captures explanation generation works done in other areas.

    Explanation as model reconciliation: The framework discussed in this paper is quite similar to the model reconciliation framework proposed to capture explanations in planning settings. There, the framework deals with cases where the explainer provides information about the planning model that is meant to improve the explicability of the plan in question (as evaluated by the explainee). There are a lot of works in this direction, looking at various settings, including cases where the explanation involves providing information about abstractions of the original model that maintain some required properties of the solution. The current framework should be compared and contrasted against this framework even if the paper chooses to focus solely on ML systems. Papers like [1] and [2] are good starting points to understand the framework.

    Explanations being Objective: I think the authors' claim that explanations are objective requires stronger arguments than what is provided in the paper. Particularly given the fact that the paper defines an explanation to be information that corresponds to a more interpretable model (a subjective measure), I fail to see how it can be objective. Especially in cases where we are dealing with users with differing levels of background knowledge, it should be possible to come up with information that could explain the decision to certain types of users but not to others. I understand that the authors try to differentiate between the clarity of an explanation and it being an explanation or not. But I would argue that if, after receiving the information, the user is no closer to understanding the decision, then the information doesn't constitute an actual explanation.

    Explanations as studied in Social Sciences: The problem of what constitutes useful explanations has been the subject of study in many fields outside computer science. If the authors want to propose a general framework that is supposed to capture explanatory dialogues, then such works cannot be overlooked. The AIJ article [3] would be a great starting place to get a sense of the work done in this direction. The article includes a lot of useful pointers to works that have looked at the social aspects of explanations.

    Some smaller points: It would be interesting to include a discussion on how you see works like TCAV, and the ones that provide explanations in the form of saliency maps, fitting into the framework. Here the explanations are being provided at such abstract levels that it is hard to really quantify the fidelity of the model being represented by such information.

    [1] Chakraborti, T., Sreedharan, S., Zhang, Y., Kambhampati, S. "Plan explanations as model reconciliation: Moving beyond explanation as soliloquy." IJCAI 2017.
    [2] Chakraborti, T., Sreedharan, S., Kambhampati, S. "Human-aware planning revisited: A tale of three models." IJCAI-ECAI XAI/ICAPS XAIP Workshops, 2018.
    [3] Miller, T. "Explanation in artificial intelligence: Insights from the social sciences." Artificial Intelligence 267 (2019): 1-38.


     REVIEW 3 

    SUBMISSION: 5 TITLE: Agent-Based Explanations in AI: Towards an Abstract Framework AUTHORS: Giovanni Ciatto, Michael I. Schumacher, Andrea Omicini and Davide Calvaresi


     Overall evaluation 

    SCORE: 2 (accept)


    TEXT: The paper proposes an explainable artificial-intelligence-based approach for multi-agent systems. In particular, the framework introduces a clear distinction between two orthogonal, yet interrelated, activities (interpretation and explanation) which can be performed on numeric predictors in order to make them more understandable from the perspective of human beings.

    I would suggest giving the existing formal definitions of interpretation and explanation in the background section, with the appropriate references. Figure 1 is not readable, so it needs to be enlarged.

    Please explain in detail how the rational agent seeking to understand the model chooses between the explainability axis and the interpretability axis in Section 3.3.

    In Section 4, the assessment of the work should be done by relating the model explanation and outcome to some examples or a real-life case study, to help the readers understand the implications of the proposed framework. The manuscript could be considered for publication should you incorporate these revisions.
