Towards XMAS: eXplainability through Multi-Agent Systems


Andrea Omicini, Giovanni Ciatto, Roberta Calegari, Davide Calvaresi

In the context of the Internet of Things (IoT), intelligent systems (IS) increasingly rely on Machine Learning (ML) techniques. Given the opaqueness of most ML techniques, however, humans must rely on their intuition to fully understand IS outcomes: helping them do so is the target of eXplainable Artificial Intelligence (XAI). Current solutions – mostly too specific, and simply aimed at making ML easier to interpret – cannot satisfy the needs of the IoT, which is characterised by heterogeneous stimuli, devices, and data types concurring in the composition of complex information structures. Moreover, the achievements and advancements of Multi-Agent Systems (MAS) are most often ignored, even though they could bring about key features such as explainability and trustworthiness. Accordingly, in this paper we (i) elicit and discuss the most significant issues affecting modern IS, and (ii) devise the main elements and related interconnections paving the way towards reconciling interpretable and explainable IS using MAS.

Workshop "AI & IoT 2019"
AI*IA 2019, Rende, Italy, 22/11/2019

