XAI: Current Frontiers and the Path Ahead Towards Trustworthy AI


Artificial Intelligence (AI) is now deployed across a wide range of sophisticated applications; however, the opacity of many AI models makes their outputs hard to understand and to trust. In many settings it is essential to understand the reasoning behind a model's decisions. This has motivated the development and deployment of eXplainable AI (XAI) methods and methodologies aimed at making AI model outputs more explainable, and XAI has become a topic of substantial research interest within the broader field of AI in recent years. Achieving explainability frequently involves integrating symbolic, logic-based AI with machine learning and other sub-symbolic techniques, with the goal of building AI systems that make accurate predictions and decisions while also providing understandable explanations for them.
This talk provides an overview of current research and trends in this rapidly emerging area, illustrated with examples. It also introduces available evaluation metrics and open-source packages, and highlights possible future research directions. Finally, it examines the significance of explainability in establishing trustworthiness, exploring the challenges and gaps that hinder the achievement of responsible AI from the viewpoint of explainability.
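To make the idea of explaining a black-box model's output concrete, the following is a minimal sketch of a perturbation-based (occlusion) feature-attribution method, one of the simplest post-hoc XAI techniques. It is illustrative only: the `predict` function is a hypothetical stand-in for an opaque model, and real open-source XAI packages implement far more principled attribution schemes.

```python
def predict(x):
    # Hypothetical "black box": here, a fixed linear model.
    # In practice this would be an opaque learned model.
    w = [3.0, 0.0, -1.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def feature_importance(x, baseline=None):
    """Score each feature by how much the prediction changes when
    that feature is replaced by a baseline value (occlusion)."""
    baseline = [0.0] * len(x) if baseline is None else baseline
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        scores.append(abs(base_pred - predict(perturbed)))
    return scores

# Feature 0 drives the prediction most; feature 1 is irrelevant.
print(feature_importance([1.0, 1.0, 1.0]))  # [3.0, 0.0, 1.0]
```

Such per-feature scores are a simple example of the "understandable explanations" the abstract refers to; evaluating how faithful they are to the model is exactly what XAI evaluation metrics address.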

reference publication
Assessing and Enforcing Fairness in the AI Lifecycle (paper in proceedings, 2023) — Roberta Calegari, Gabriel G. Castañé, Michela Milano, Barry O’Sullivan
funding project
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems (01/11/2022–31/10/2025)
