ICE: An Evaluation Metric to Assess Symbolic Knowledge Quality

2024

The automated assessment of symbolic knowledge, derived, for instance, from extraction procedures, facilitates the auto-tuning of machine learning algorithms, obviating the inherent biases of subjective human evaluations. Despite advancements, comprehensive metrics for evaluating knowledge quality are missing in the literature. To address this gap, our study introduces ICE, a novel evaluation metric designed to measure the quality of symbolic knowledge. This metric computes a score by considering three quality sub-indices, namely predictive performance, human readability, and completeness, and it can be tailored to suit the specific requirements of the case at hand by adjusting the weights assigned to each sub-index. We present here the mathematical formulation of the ICE score, and show its effectiveness through comparative analyses with existing quality scores applied to real-world tasks.
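
As an informal illustration of the idea only (the paper's actual formulation is given later and may differ), a weighted aggregation of the three sub-indices could take the shape of a convex combination, where the symbols P, R, C and the weights w_P, w_R, w_C are purely illustrative placeholders for the predictive-performance, readability, and completeness sub-indices:

$\mathrm{ICE}(K) = w_P \, P(K) + w_R \, R(K) + w_C \, C(K), \qquad w_P + w_R + w_C = 1, \quad w_P, w_R, w_C \geq 0$

Under this reading, tailoring the metric to a specific use case amounts to shifting mass among the weights, e.g. favouring readability over raw predictive performance when the extracted knowledge is meant for human inspection.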

Keywords: Explainable artificial intelligence · Symbolic knowledge extraction · AutoML
Funding projects:
AEQUITAS — Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems (01/11/2022–31/10/2025)
FAIR-PE01-SP08 — Future AI Research – Partenariato Esteso sull'Intelligenza Artificiale – Spoke 8 "Pervasive AI" (01/01/2023–31/12/2025)