Towards XMAS: eXplainability through Multi-Agent Systems

In the context of the Internet of Things (IoT), intelligent systems (IS) increasingly rely on Machine Learning (ML) techniques. Given the opacity of most ML techniques, however, humans must fall back on their intuition to fully understand IS outcomes: helping them do so is the goal of eXplainable Artificial Intelligence (XAI). Current solutions – mostly too domain-specific, and aimed merely at making ML easier to interpret – cannot satisfy the needs of the IoT, which is characterised by heterogeneous stimuli, devices, and data types concurring in the composition of complex information structures. Moreover, the achievements and advances of Multi-Agent Systems (MAS) are most often ignored, even though they could provide key features such as explainability and trustworthiness.
Accordingly, in this paper we (i) elicit and discuss the most significant issues affecting modern IS, and (ii) devise the main elements and related interconnections paving the way towards reconciling interpretable and explainable IS using MAS.

hosting event
AI&IoT 2019@AIIA 2019
reference publication
Towards XMAS: eXplainability through Multi-Agent Systems (paper in proceedings, 2019) — Giovanni Ciatto, Roberta Calegari, Andrea Omicini, Davide Calvaresi