On the Evaluation of the Symbolic Knowledge Extracted from Black Boxes

AI Ethics
January 2024

As opaque decision systems are increasingly adopted in almost every application field, their lack of transparency and human readability is a concrete concern for end users. Among existing proposals to associate human-interpretable knowledge with the accurate predictions provided by opaque models are rule extraction techniques, capable of extracting symbolic knowledge from opaque models. The quantitative assessment of the extracted knowledge's quality is still an open issue. For this reason, we provide here a first approach to measuring knowledge quality, encompassing several indicators and providing a compact score reflecting the readability, completeness and predictive performance associated with a symbolic knowledge representation. We also discuss the main criticalities behind our proposal, related to readability assessment and evaluation, to push future research efforts towards a more robust score formulation.

Keywords: Explainable artificial intelligence; Symbolic knowledge extraction; Readability metrics; AutoML
Reference talk
On the Evaluation of the Symbolic Knowledge Extracted from Black Boxes (AAAI Spring Symposium 2023 – AITA: AI Trustworthiness Assessment, 27/03/2023) — Roberta Calegari (Federico Sabbatini, Roberta Calegari)
Funding project
TAILOR — Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization (01/09/2020–31/08/2024)