Piero Castoldi, Anna Lina Ruscelli, Lorenzo Mucchi, Matti Hämäläinen (eds.)
2025
The intersection of Artificial Intelligence (AI) and healthcare has led to significant advancements, particularly through Machine Learning (ML), which utilises large datasets to develop predictive models for diagnosis and treatment, as well as to identify disease risk factors. Despite their success in clinical medicine, only a few models have been routinely adopted in clinical settings, due to issues related to trustworthiness: it is not clear if, and to what extent, ML models (learn to) comply with existing medical knowledge, as formalised by clinical protocols. To address these concerns, in the field of Trustworthy AI (TAI), Symbolic Knowledge Injection (SKI) has been proposed as a solution: SKI integrates domain-specific expertise, encoded as rules, into ML models, while retaining their predictive capabilities. Despite promising results, the applicability of SKI to healthcare scenarios has not yet been thoroughly investigated. Accordingly, in this study, we explore the applicability of SKI methods in the healthcare domain under a critical lens. In particular, we exploit experiments on medical datasets to evaluate (i) the impact of SKI on the predictive capabilities of ML models (in particular, recall, precision, and F1-score), (ii) their adherence to medical protocols (e.g., coverage), and (iii) their robustness w.r.t. data and knowledge degradation. Results demonstrate the potential of integrating machine-learned insights with established medical guidelines, improving several clinically relevant metrics.
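To make the evaluation criteria above concrete, the following is a minimal, purely illustrative sketch of how the predictive metrics (precision, recall, F1-score) and a protocol-adherence notion of coverage could be computed. The rule, feature names, and data here are hypothetical toy values, not taken from the study; "coverage" is sketched as the fraction of rule-applicable samples on which the model's prediction agrees with the rule's conclusion, one plausible reading of the term.

```python
# Illustrative sketch (hypothetical data and rule, not from the study):
# predictive metrics plus a simple rule-coverage measure.

def precision_recall_f1(y_true, y_pred, positive=1):
    """Standard binary-classification metrics for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

def rule_coverage(samples, y_pred, rule):
    """Fraction of rule-applicable samples on which the model's
    prediction agrees with the rule's conclusion (one possible
    formalisation of protocol adherence)."""
    applicable = [(x, p) for x, p in zip(samples, y_pred) if rule["applies"](x)]
    if not applicable:
        return 1.0  # vacuously satisfied: the rule never fires
    agree = sum(1 for x, p in applicable if p == rule["conclusion"])
    return agree / len(applicable)

# Hypothetical protocol rule: "if glucose > 126 then predict class 1".
rule = {"applies": lambda x: x["glucose"] > 126, "conclusion": 1}
samples = [{"glucose": 140}, {"glucose": 100}, {"glucose": 130}, {"glucose": 90}]
y_true = [1, 0, 1, 0]
y_pred = [1, 0, 0, 0]  # the model misses one rule-applicable positive

p, r, f = precision_recall_f1(y_true, y_pred)
cov = rule_coverage(samples, y_pred, rule)
# p = 1.0, r = 0.5, cov = 0.5: the model is precise but violates
# the protocol rule on half of the samples where it applies.
```

An SKI method would aim to raise the coverage score, by constraining training with such rules, without degrading precision, recall, or F1.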
Keywords
Symbolic Knowledge Injection, Neurosymbolic AI, Clinical protocols and data