EXTRAAMAS 2019

1st International Workshop on EXplainable TRansparent Autonomous Agents and Multi-Agent Systems
Montreal, QC, Canada, 13–14 May 2019

Human decisions increasingly rely on Artificial Intelligence (AI) techniques implementing autonomous decision-making and distributed problem-solving. However, the reasoning and dynamics powering such systems are becoming increasingly opaque, and societal awareness of this lack of transparency and of the need for explainability is rising. As a consequence, new legal constraints and grant solicitations have been defined to enforce transparency and explainability in IT systems; an example is the General Data Protection Regulation (GDPR), which became effective in Europe in May 2018. Emphasizing the need for transparency in AI systems, recent studies have pointed out that equipping intelligent systems with explanatory capabilities has a positive impact on users (e.g., helping to overcome the discomfort, confusion, and self-deception caused by a lack of understanding). For all these reasons, Explainable Artificial Intelligence (XAI) has recently re-emerged as a hot topic in AI, attracting research from domains such as machine learning, robot planning, and multi-agent systems.

Agents and Multi-Agent Systems (MAS) can make two core contributions to XAI. The first lies in personal intelligent systems that provide tailored, personalized feedback (e.g., recommendation and coaching systems). Autonomous agent and multi-agent approaches have recently achieved notable results and scientific relevance in several research domains (e.g., e-health, UAVs, smart environments). However, even when correct, the outcomes of such agent-based systems, as well as their impact and effect on users, can be undermined by a lack of clarity and explainability in their dynamics and rationale. If made explainable, however, their understandability, reliability, and acceptance can be enhanced. In particular, personal user features (e.g., context, expertise, age, and cognitive abilities), which are already used to compute the outcome, can also be employed in the explanation process to provide a user-tailored solution.
The second axis is agent/robot teams and mixed human-agent teams. In this context, successful collaboration requires a mutual understanding of the status, capacities, and limitations of the other agents and users; this ensures efficient teamwork and avoids potential dangers caused by misunderstandings. In such scenarios, explainability extends beyond single human-agent settings to agent-agent and even mixed human-agent team explainability.

The main aim of this first International Workshop on EXplainable TRansparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS) is fourfold:
(i) to establish a common ground for the study and development of explainable and understandable autonomous agents, robots and MAS,
(ii) to investigate the potential of agent-based systems in the development of personalized user-aware explainable AI,
(iii) to assess the impact of transparent and explained solutions on user/agent behavior, and
(iv) to discuss motivating examples and concrete applications in which the lack of explainability leads to problems that explainability would resolve.
Contributions are encouraged on both theoretical and practical aspects of transparent and explainable intelligence in agents and MAS. Papers presenting theoretical contributions, designs, prototypes, tools, user studies, assessments, new or improved techniques, and general surveys tracking current evolutions and future directions are welcome.

Topics of interest

Explainable agent architectures  •  Adaptive and personalized explainable agents  •  Explainable human-robot interaction  •  Expressive robots  •  Explainable planning  •  Explanation visualization  •  Explainable agent applications (e-health, smart environments, driving companions, recommender systems, coaching agents, etc.)  •  Reinforcement learning agents  •  Cognitive and social science perspectives on explanations  •  Legal aspects of explainable agents

Proceedings: Explainable, Transparent Autonomous Agents and Multi-Agent Systems (edited volume, 2019), edited by Davide Calvaresi, Amro Najjar, Michael Schumacher, and Kary Främling.