Shallow2Deep: Limit Neural Networks Opacity through Neural Architecture Search

Last modified by Giovanni Ciatto on 16/04/2021 09:13

Andrea Agiollo, Giovanni Ciatto, Andrea Omicini

Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study of the applicability to XAI of the Neural Architecture Search (NAS) paradigm, a sub-field of DL aimed at the automatic design of NN structures. We propose Shallow2Deep, an evolutionary NAS algorithm that exploits local variability to limit the opacity of DL systems by simplifying NN architectures. Shallow2Deep effectively reduces NN complexity, and therefore opacity, while reaching state-of-the-art performance. Unlike its competitors, Shallow2Deep promotes variability of localised structures in NN, helping to reduce NN opacity. The proposed work analyses the role of local variability in NN architecture design, presenting experimental results that show the desirability of this feature.
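The evolutionary NAS idea sketched in the abstract (searching for simpler, hence less opaque, architectures while preserving performance) can be illustrated with a generic toy loop. The sketch below is not Shallow2Deep itself: the fitness proxy, mutation operators, penalty weight, and all names are hypothetical, chosen only to show how a complexity penalty steers an evolutionary search toward smaller networks.

```python
import random

# Generic, illustrative evolutionary NAS sketch (NOT the paper's algorithm).
# An architecture is a list of dense-layer widths, e.g. [784, 256, 10].

def complexity(arch):
    # Parameter count of a fully-connected stack: a crude proxy for opacity.
    return sum(a * b for a, b in zip(arch, arch[1:]))

def fitness(arch, accuracy_proxy, penalty=1e-4):
    # Reward task performance, penalise complexity (and thus opacity).
    return accuracy_proxy(arch) - penalty * complexity(arch)

def mutate(arch, rng):
    # Local mutation: drop, narrow, or add one hidden layer.
    arch = list(arch)
    if len(arch) > 3 and rng.random() < 0.3:
        del arch[rng.randrange(1, len(arch) - 1)]    # drop a hidden layer
    elif rng.random() < 0.5:
        i = rng.randrange(1, len(arch) - 1)
        arch[i] = max(4, arch[i] // 2)               # narrow a hidden layer
    else:
        arch.insert(rng.randrange(1, len(arch) - 1),
                    rng.choice([16, 32, 64]))        # add a small hidden layer
    return arch

def evolve(initial, accuracy_proxy, generations=50, pop_size=8, seed=0):
    rng = random.Random(seed)
    population = [initial] * pop_size
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda a: fitness(a, accuracy_proxy),
                        reverse=True)
        survivors = ranked[: pop_size // 2]          # elitist selection
        children = [mutate(rng.choice(survivors), rng)
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=lambda a: fitness(a, accuracy_proxy))

# Hypothetical accuracy proxy: reward saturates once capacity is "enough".
def toy_accuracy(arch):
    return min(1.0, sum(arch[1:-1]) / 100.0)

initial = [784, 256, 256, 256, 10]
best = evolve(initial, toy_accuracy)
```

Because the accuracy proxy saturates, any fitness gain over the initial network must come from a lower complexity, so the search converges toward shallower or narrower stacks, mirroring the trade-off the abstract describes.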

Proceedings of the 3rd International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems (EXTRAAMAS 2021), 2021


2011 © aliCE Research Group @ DEIS, Alma Mater Studiorum-Università di Bologna