Computational argumentation and symbolic reasoning for explainable AI

New research efforts toward eXplainable Artificial Intelligence (XAI) aim to mitigate the opacity issue and pursue the ultimate goal of building understandable, accountable, and trustworthy intelligent systems, although there is still a long way to go. In this context, it is increasingly recognised that symbolic approaches to machine intelligence may play a critical role in overcoming the limitations arising from the intrinsic opacity of sub-symbolic approaches. In particular, among the various approaches to XAI, argumentative models have been advocated in both the AI and the social science literature, as their dialectical nature fits several desirable features of the explanation activity. Computational argumentation is a well-established paradigm in AI, at the intersection of knowledge representation and reasoning, natural language processing, and multi-agent systems. It is based on defining argumentation frameworks comprising sets of arguments and dialectical relations between them (e.g., attack and support), together with semantics (e.g., definitions of dialectically acceptable sets of arguments, or of the dialectical strength of individual arguments) and accompanying computational machinery. In this talk, I will show how computational argumentation, combined with a variety of mechanisms for mining argumentation frameworks, can be used to support various forms of XAI, as well as existing approaches in the field.
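As a minimal illustration of the kind of framework and semantics the abstract refers to (a sketch, not material from the talk itself): in an abstract argumentation framework in the style of Dung (1995), arguments are abstract nodes connected by an attack relation, and one standard semantics, the grounded extension, is the least fixed point of the characteristic function F(S) = {a : every attacker of a is attacked by some argument in S}. The argument names below are purely hypothetical.

```python
# Abstract argumentation framework: arguments plus an "attacks" relation.
# The grounded extension is computed by iterating the characteristic
# function F(S) from the empty set until a fixed point is reached.

def grounded_extension(arguments, attacks):
    """arguments: a set of argument labels.
    attacks: a set of (attacker, attacked) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def acceptable(a, s):
        # a is acceptable w.r.t. s if every attacker of a is itself
        # attacked by some member of s (i.e., s defends a).
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if acceptable(a, s)}
        if nxt == s:          # least fixed point reached
            return s
        s = nxt

# Hypothetical example: a attacks b, b attacks c.
# a is unattacked, so it is in; a defeats b, which reinstates c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

The same skeleton extends to other semantics (e.g., preferred or stable extensions) and to bipolar frameworks with a support relation, which the abstract also mentions; those require different acceptability conditions rather than a different data structure.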