XAI.it 2020

Italian Workshop on Explainable Artificial Intelligence
Università di Milano-Bicocca, Milan, Italy, 25/11/2020–26/11/2020

Artificial Intelligence systems play an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms be as transparent as possible. It is no coincidence that the recent General Data Protection Regulation (GDPR) emphasizes users' right to explanation when they face artificial intelligence-based technologies. Unfortunately, current research tends to move in the opposite direction: most approaches maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with this dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to give users more control over the behavior of intelligent systems. The workshop addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the various sub-fields of AI.

Hosted talk:
Argumentation and Logic Programming for Explainable and Ethical AI (AIxIA 2020, 25/11/2020) — Roberta Calegari (Andrea Omicini, Giovanni Sartor, Roberta Calegari)