As opaque decision systems are increasingly adopted in almost every application field, their lack of transparency and human readability is a concrete concern for end users. Among existing proposals to pair the accurate predictions of opaque models with human-interpretable knowledge, rule extraction techniques are capable of distilling symbolic knowledge out of such models. However, the quantitative assessment of the extracted knowledge's quality is still an open issue. For this reason, we provide here a first approach to measuring knowledge quality, encompassing several indicators and providing a compact score that reflects the readability, completeness, and predictive performance associated with a symbolic knowledge representation. We also discuss the main criticalities behind our proposal, related to the assessment and evaluation of readability, to push future research efforts towards a more robust score formulation.
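The abstract does not spell out how the three dimensions are combined; as a purely illustrative sketch in Python, the function below shows one way a compact score could aggregate readability, completeness, and predictive performance. All names, the readability proxy (penalising many long rules), and the equal weighting are assumptions for illustration, not the formulation proposed in the paper.

    from dataclasses import dataclass

    @dataclass
    class ExtractedRules:
        n_rules: int            # number of extracted rules
        avg_rule_length: float  # average antecedent terms per rule
        coverage: float         # fraction of instances matched by some rule
        fidelity: float         # agreement with the opaque model's predictions

    def quality_score(rules: ExtractedRules,
                      w_read: float = 1/3,
                      w_compl: float = 1/3,
                      w_perf: float = 1/3) -> float:
        """Combine readability, completeness, and predictive performance
        into a single score in [0, 1] (higher is better)."""
        # Hypothetical readability proxy: shrink as rule sets grow larger/longer.
        readability = 1.0 / (1.0 + rules.n_rules * rules.avg_rule_length)
        completeness = rules.coverage
        performance = rules.fidelity
        return w_read * readability + w_compl * completeness + w_perf * performance

    # Usage: a small, fairly faithful rule set scores moderately well.
    rules = ExtractedRules(n_rules=5, avg_rule_length=3.0, coverage=0.92, fidelity=0.88)
    print(round(quality_score(rules), 3))  # 0.621

Note that any such weighted combination inherits the difficulty the paper highlights: readability lacks an agreed-upon quantitative proxy, which is precisely why the score formulation remains open to refinement.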
Funding: TAILOR — Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimization (01/09/2020–31/08/2024).