Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search


Recently, the Deep Learning (DL) research community has focused on developing efficient and high-performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and DL methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS), a sub-field of DL concerned with the automatic design of NN structures, to XAI.
We propose Shallow2Deep, an evolutionary NAS algorithm that exploits local variability to restrain the opacity of DL systems by simplifying NN architectures. Shallow2Deep effectively reduces NN complexity – and therefore opacity – while reaching state-of-the-art performance. Unlike its competitors, Shallow2Deep promotes the variability of localised structures in NN, which helps reduce their opacity. This work analyses the role of local variability in the design of NN architectures, presenting experimental results that show this feature to be indeed desirable.
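
As a rough illustration of the shallow-to-deep idea, the sketch below shows a minimal evolutionary loop that starts from the shallowest candidate architecture and deepens one randomly chosen block at a time, accepting the extra depth only when it improves a fitness score. All names here (mutate, shallow2deep_search, the toy fitness function) are hypothetical illustrations under these assumptions, not the authors' actual Shallow2Deep implementation.

import random

def mutate(architecture):
    """Grow the architecture locally: deepen one randomly chosen block.

    An architecture is encoded as a list of block depths, so a mutation
    increments the depth of a single block (local variability).
    """
    child = list(architecture)
    i = random.randrange(len(child))
    child[i] += 1
    return child

def shallow2deep_search(evaluate, n_blocks=4, generations=20):
    """Start from the shallowest network and deepen it only when the
    deeper candidate actually improves the evaluation score."""
    best = [1] * n_blocks          # shallowest possible candidate
    best_score = evaluate(best)
    for _ in range(generations):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:     # accept extra depth only if it pays off
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    # Toy fitness standing in for real training + validation of each
    # candidate: reward depth up to a point, then penalise complexity.
    toy_fitness = lambda arch: sum(min(d, 3) for d in arch) - 0.1 * sum(arch)
    arch, score = shallow2deep_search(toy_fitness)
    print(arch, score)

Because depth is only ever added when it improves the score, the search is biased towards the shallowest architecture that performs well, which is the sense in which such a procedure restrains complexity.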

hosting event
EXTRAAMAS 2021 @ AAMAS 2021
reference publication
Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search (paper in proceedings, 2021) — Andrea Agiollo, Giovanni Ciatto, Andrea Omicini
funding project
EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge (01/04/2021–31/03/2024)
StairwAI — Stairway to AI: Ease the Engagement of Low-Tech users to the AI-on-Demand platform through AI (01/01/2021–31/12/2023)
works as reference talk for
Shallow2Deep: Restraining Neural Networks Opacity through Neural Architecture Search (paper in proceedings, 2021) — Andrea Agiollo, Giovanni Ciatto, Andrea Omicini