Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling (eds.)
Explainable and Transparent AI and Multi-Agent Systems. Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers, pages 63-82
Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence) 12688
Springer Nature, Basel, Switzerland
2021
Recently, the Deep Learning (DL) research community has focused on developing efficient and high-performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and DL methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS), a sub-field of DL concerned with the automatic design of NN architectures, to XAI. We propose Shallow2Deep, an evolutionary NAS algorithm that exploits local variability to restrain the opacity of DL systems through the simplification of NN architectures. Shallow2Deep effectively reduces NN complexity, and therefore their opacity, while reaching state-of-the-art performance. Unlike its competitors, Shallow2Deep promotes the variability of localised structures within NN, which helps to reduce their opacity. The proposed work analyses the role of local variability in NN architecture design, presenting experimental results that show how this feature is indeed desirable.
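To make the general idea of an evolutionary NAS loop with localised mutations concrete, the following is a minimal, hypothetical sketch. It is not the authors' Shallow2Deep implementation: the architecture encoding (a list of blocks with depth and width), the fitness proxy, and all function names (`random_architecture`, `mutate_local_block`, `evolve`) are illustrative assumptions; in a real NAS setting the fitness would be obtained by training and evaluating each candidate network.

```python
import random

# Hypothetical encoding: an architecture is a list of blocks, each with a
# depth (number of layers) and a width (units per layer).
Block = dict


def random_architecture(n_blocks: int = 4) -> list:
    """Sample a random candidate architecture."""
    return [{"depth": random.randint(1, 3), "width": random.choice([16, 32, 64, 128])}
            for _ in range(n_blocks)]


def complexity(arch: list) -> int:
    """Rough proxy for opacity: total layers weighted by width."""
    return sum(b["depth"] * b["width"] for b in arch)


def fitness(arch: list) -> float:
    """Placeholder objective rewarding moderate capacity, penalising size.

    In practice this would train/evaluate the candidate network; here a
    synthetic score keeps the sketch self-contained and runnable.
    """
    capacity = complexity(arch)
    simulated_accuracy = 1.0 - abs(capacity - 300) / 1000.0
    return simulated_accuracy - 0.0005 * capacity


def mutate_local_block(arch: list) -> list:
    """Localised mutation: change one block only, preserving the rest."""
    child = [dict(b) for b in arch]
    i = random.randrange(len(child))
    if random.random() < 0.5:
        child[i]["depth"] = max(1, child[i]["depth"] + random.choice([-1, 1]))
    else:
        child[i]["width"] = random.choice([16, 32, 64, 128])
    return child


def evolve(generations: int = 50, population_size: int = 10) -> list:
    """Simple (mu + lambda)-style evolutionary loop over architectures."""
    population = [random_architecture() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        offspring = [mutate_local_block(random.choice(parents)) for _ in parents]
        population = parents + offspring
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best architecture:", best, "| complexity:", complexity(best))
```

The point of the sketch is the mutation operator: because `mutate_local_block` alters only one block at a time, variation is localised, which is the property the abstract argues is desirable for keeping the resulting architectures simple.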