Towards XMAS: eXplainability through Multi-Agent Systems

Claudio Savaglio, Giancarlo Fortino, Giovanni Ciatto, Andrea Omicini (eds.)
AI&IoT 2019 – Artificial Intelligence and Internet of Things 2019, pages 40-53
CEUR Workshop Proceedings 2502
Sun SITE Central Europe, RWTH Aachen University
November 2019

In the context of the Internet of Things (IoT), intelligent systems (IS) are increasingly relying on Machine Learning (ML) techniques.
Given the opaqueness of most ML techniques, however, humans have to rely on their intuition to fully understand the IS outcomes: helping them is the target of eXplainable Artificial Intelligence (XAI).
Current solutions – mostly too specific, and simply aimed at making ML easier to interpret – cannot satisfy the needs of the IoT, which is characterised by heterogeneous stimuli, devices, and data types concurring in the composition of complex information structures.
Moreover, Multi-Agent Systems (MAS) achievements and advancements are most often ignored, even when they could bring about key features like explainability and trustworthiness.

Accordingly, in this paper we (i) elicit and discuss the most significant issues affecting modern IS, and (ii) devise the main elements and related interconnections paving the way towards reconciling interpretable and explainable IS using MAS.

Keywords: MAS; XMAS; XAI; explainability; road map
Origin event: AI&IoT 2019@AIIA 2019
Journal or series: CEUR Workshop Proceedings
Works as reference publication for talk: Towards XMAS: eXplainability through Multi-Agent Systems (AI&IoT 2019@AIIA 2019, 22/11/2019) — presented by Giovanni Ciatto (Giovanni Ciatto, Roberta Calegari, Andrea Omicini, Davide Calvaresi)