EXTRAAMAS 2025

7th International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems
Detroit, MI, USA, 19–20 May 2025

The International Workshop on EXplainable, Trustworthy, and Responsible AI and Multi-Agent Systems (EXTRAAMAS) has run since 2019 and is a well-established forum for discussing and disseminating research on explainable artificial intelligence, with a particular focus on intra/inter-agent explainability and cross-disciplinary perspectives. For its 7th edition, EXTRAAMAS 2025 identifies the following focus topics, with the ultimate goal of strengthening cutting-edge foundational and applied research.

  1. XAI Fundamentals.
    EXTRAAMAS encourages the submission of seminal and visionary research papers.
  2. XAI in Action: Applied perspectives.
    EXTRAAMAS explicitly encourages the submission of applied research and demo papers.
  3. Cross-disciplinary Perspectives: XAI and Law, dialogues, GenAI and prompting, …
Topics of interest

Track 1: XAI in symbolic and subsymbolic AI: The “AI dichotomy” separating symbolic (a.k.a. classical) AI from connectionist AI has persisted for more than seven decades. Nevertheless, the advent of explainable AI has accelerated and intensified efforts to bridge this gap, since providing faithful explanations of black-box machine learning techniques necessarily means combining symbolic and subsymbolic AI. This track discusses recent work on this hot topic in AI (an illustrative sketch follows the topic list below).
Track 1 chair: Dr. Giovanni Ciatto, University of Bologna.

  • XAI for Machine learning
  • Explainable neural networks
  • Symbolic knowledge injection or extraction
  • Neuro-symbolic computation
  • Computational logic for XAI
  • Multi-agent architectures for XAI
  • Surrogate models for subsymbolic predictors
  • Explainable planning (XAIP)
  • XAI evaluation
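
As a concrete illustration of the surrogate-model topic above, consider a minimal sketch (not part of the official call; scikit-learn and the toy Iris dataset are assumptions made here for brevity): an interpretable decision tree is fitted to the predictions of a black-box classifier, and its fidelity, i.e., how often it agrees with the black box, is reported alongside the extracted rules.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    # Black box: a random forest stands in for any subsymbolic predictor.
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Surrogate: a shallow tree trained on the black box's *predictions*
    # (not the ground truth), so its rules describe the black box itself.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: agreement between surrogate and black box.
    fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
    print(f"fidelity to black box: {fidelity:.2f}")

    # Symbolic rules extracted from the surrogate, readable as if-then statements.
    print(export_text(surrogate, feature_names=load_iris().feature_names))

Fidelity measured on held-out data, rather than on the training set as in this sketch, is what makes such extracted rules trustworthy as explanations.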

Track 2: XAI in negotiation and conflict resolution: Conflict resolution (e.g., agent-based negotiation, voting, argumentation) has been a prosperous domain within the MAS community since the field's foundation. However, as agents and the problems they tackle become more complex, incorporating explainability becomes vital for assessing the usefulness of a supposedly conflict-free solution. This is the main topic of this track, with a special focus on MAS negotiation and explainability (an illustrative sketch follows the topic list below).
Track 2 chair: Dr. Reyhan Aydoğan, Özyeğin University & Delft University of Technology.

  • Explainable conflict resolution techniques/frameworks
  • Explainable negotiation protocols and strategies
  • Explainable recommendation systems
  • Trustworthy voting mechanisms
  • Argumentation for explaining the process itself
  • Argumentation for explaining and supporting the potential outcomes
  • Explainable user/agent profiling (e.g., learning a user's preferences or strategies)
  • User studies and assessment of the aforementioned approaches
  • Applications (virtual coaches, robots, IoT)
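
To make the negotiation theme concrete, here is a hypothetical sketch (issue names, weights, and thresholds are illustrative assumptions, not part of the call): an agent scores a multi-issue offer with a weighted additive utility and, instead of a bare accept/reject, also reports which issues drove the decision.

    # Hypothetical issue weights for a buyer agent (must sum to 1).
    WEIGHTS = {"price": 0.5, "delivery_days": 0.3, "warranty_years": 0.2}

    # Per-issue scoring functions mapping raw offer values to [0, 1] utilities.
    SCORERS = {
        "price": lambda v: max(0.0, 1.0 - v / 1000.0),
        "delivery_days": lambda v: max(0.0, 1.0 - v / 30.0),
        "warranty_years": lambda v: min(1.0, v / 3.0),
    }

    def evaluate(offer, threshold=0.6):
        """Accept or reject an offer, and explain which issues drove the decision."""
        scores = {issue: SCORERS[issue](offer[issue]) for issue in WEIGHTS}
        utility = sum(WEIGHTS[issue] * scores[issue] for issue in WEIGHTS)
        weak = [issue for issue, s in scores.items() if s < 0.5]
        reason = "weak on: " + ", ".join(weak) if weak else "acceptable on all issues"
        return utility >= threshold, f"utility {utility:.2f} vs threshold {threshold}; {reason}"

    accepted, explanation = evaluate({"price": 800, "delivery_days": 25, "warranty_years": 1})
    print("ACCEPT" if accepted else "REJECT", "-", explanation)

The same per-issue scores could feed an argumentation step, e.g., generating a counter-offer that repairs exactly the issues named in the explanation.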

Track 3: Prompts, Interactive Explainability and Dialogue: Appropriate everyday explanations about automated decision-making are context-dependent and interactive. An explanation must fill a 'gap' in the apparent knowledge of the user in a specific context. However, dynamic user modelling is hard. Explanatory dialogue allows designers to try out partial explanations and to fine-tune or adjust them based on feedback. This potential for dynamic adjustment can only be realized if the system has appropriate interactive capabilities, such as context modelling, user modelling, initiative handling, topic management, and grounding. The rapid evolution of LLMs and chatbots has sparked a debate on how to make good use of the interactive capabilities of these new models for explainable AI. The use of LLMs also carries risks, especially concerning reliability, which raises relevant methodological questions: How can we ensure that LLMs use reliable data when answering? How should research based on black-box models be evaluated? What are good techniques for prompt engineering? In this research track, we welcome new ideas as well as established research outcomes on the wider topic of interactive or social explainable AI (a minimal dialogue-loop sketch follows the topic list below).
Track 3 chair: Dr. Giovanni Ciatto, University of Bologna.

  • Interactive capabilities for XAI
  • Arguments for persuasive explanations
  • Context modelling
  • User modelling
  • Initiative handling
  • Topic management
  • Grounding and acknowledgement
  • Prompt engineering
  • Research methodology for LLM applications
  • Responsible LLM applications
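
As a hedged sketch of these interactive capabilities (the call_llm stub and the prompt wording are assumptions made here; no specific vendor API is implied): an explanatory dialogue can keep a running user model and inject it into every prompt, so each answer targets the gap the current question reveals rather than repeating what the user already knows.

    # Stand-in for a real LLM call; any chat-completion client could be plugged in.
    def call_llm(prompt: str) -> str:
        return f"[model answer to: {prompt.splitlines()[-2]}]"

    def explain_interactively(decision: str, questions: list[str]) -> None:
        """Explanatory dialogue loop: context and user model go into every prompt."""
        user_model: list[str] = []  # crude user model: what the user has asked so far
        for question in questions:
            prompt = (
                f"Decision to explain: {decision}\n"
                f"User has already asked about: {user_model or 'nothing yet'}\n"
                f"Question: {question}\n"
                "Answer only the knowledge gap this question reveals, "
                "and cite the input features you rely on."
            )
            print(call_llm(prompt))
            user_model.append(question)  # grounding: remember the exchange

    explain_interactively(
        "loan application rejected",
        ["Why was it rejected?", "What could I change to get approval?"],
    )

A production system would replace the stub with a retrieval-grounded model and evaluate the dialogue against human judgments, which is exactly where the reliability questions above bite.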

Track 4: (X)AI in Law and Ethics: Complying with regulation (e.g., the GDPR) is among the main objectives for XAI. The right to explanation is key to ensuring the transparency of ever more complex AI systems deployed in a multitude of sensitive applications. This track discusses work related to explainability in AI ethics, machine ethics, and AI & Law.
Track 4 chairs: Dr. Rachele Carli, Umeå University & University of Luxembourg, and Prof. Simona Tiribelli, tenure-track assistant professor at the University of Macerata (Italy) and Institute for Technology & Global Health, PathCheck Foundation (an MIT-founded spin-off, Boston, US).

  • XAI in AI & Law
  • Fair (X)AI
  • XAI & Machine Ethics
  • Bias reduction
  • Deception and XAI
  • Nudging and XAI
  • Legal issues of XAI
  • Liability and XAI
  • XAI, Transparency, and the Law
  • Enforceability and XAI
  • Culture-aware systems and XAI
Publications originating from this event:
Explainable, Trustworthy, and Responsible AI and Multi-Agent Systems (edited volume, 2025) — Davide Calvaresi, Amro Najjar, Andrea Omicini, Reyhan Aydoğan, Rachele Carli, Giovanni Ciatto, Simona Tiribelli, Kary Främling
Preface (2025) — Davide Calvaresi, Amro Najjar, Andrea Omicini, Kary Främling