- explanation should be an essential tool for any intelligent component, in particular for agents in multi-agent systems
- intelligent agents should be able to explicitly represent their cognitive processes and their results, and to manipulate those representations, so that rational explanation properly complements their ability to reason and communicate
- intelligent agents should explain themselves first of all to other agents, not merely have the overall system explain itself to humans
- symbolic techniques should be used for explanation, i.e., for representing and manipulating cognitive processes and their results
- hence, symbolic techniques (e.g., argumentation) should become first-class citizens in both agent modelling and intelligent systems engineering; a minimal sketch of this idea follows the list
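
The sketch below is only an illustration of the idea in the last two points, not the project's actual design: an agent represents its conclusion as a symbolic argument (conclusion, premises, inference rule), renders it as an explanation, and another agent can challenge specific premises rather than the conclusion as a whole. All names (`Argument`, `explain`, `challenge`) and the example facts are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Argument:
    """A conclusion together with the premises and rule that support it."""
    conclusion: str
    premises: list = field(default_factory=list)  # sub-arguments or atomic facts
    rule: str = ""                                # name of the inference rule applied

    def explain(self, depth: int = 0) -> str:
        """Render the argument as an inspectable derivation tree."""
        pad = "  " * depth
        lines = [f"{pad}{self.conclusion}  [{self.rule or 'fact'}]"]
        for p in self.premises:
            if isinstance(p, Argument):
                lines.append(p.explain(depth + 1))
            else:
                lines.append(f"{pad}  {p}  [fact]")
        return "\n".join(lines)


def challenge(argument: Argument, rejected_facts: set) -> list:
    """Return the atomic premises of `argument` that another agent rejects.

    A non-empty result means the explanation is attacked, so the agents can
    argue over those premises instead of the bare conclusion.
    """
    attacked = []
    for p in argument.premises:
        if isinstance(p, Argument):
            attacked.extend(challenge(p, rejected_facts))
        elif p in rejected_facts:
            attacked.append(p)
    return attacked


if __name__ == "__main__":
    # Agent A justifies a recommendation symbolically...
    risk = Argument("patient is at risk", premises=["bp is high", "age > 65"], rule="risk-rule")
    rec = Argument("recommend check-up", premises=[risk], rule="precaution-rule")
    print(rec.explain())

    # ...and agent B, holding different knowledge, challenges part of that explanation.
    print("attacked premises:", challenge(rec, rejected_facts={"bp is high"}))
```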
hosting event
funding project
EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge
(01/04/2021–31/03/2024)