Study, design, and implement the GRAM symbolic knowledge injection algorithm

Davide Freddi  •  Davide Gardenal
abstract

Predictive modeling for sequential diagnoses is often tackled with deep learning mechanisms, which can achieve promising performance but may lack interpretability and struggle with limited sample sizes, especially for rare diagnoses. To address these challenges, GRAM [2] (GRaph-based Attention Model) supplements electronic health records (EHR) with hierarchical information from medical ontologies. By representing medical concepts as a combination of their ancestors in the ontology via an attention mechanism, GRAM aims to improve interpretability and mitigate the impact of data insufficiency. This approach allows GRAM to adaptively generalize to higher-level concepts when facing limited data at lower-level concepts, resulting in enhanced predictive performance and more meaningful representations aligned with medical knowledge. In the original work, GRAM was implemented using the Theano [6] framework, which is no longer actively maintained and has been surpassed by more modern deep learning libraries. In this study, we explore the GRAM method in depth and propose an updated implementation using the PyTorch [5] library. Our primary objectives are to replicate the embedding methods proposed in the original work and to create a flexible structure that allows GRAM to be used with various models, such as RNNs [4, 1] and transformer encoders [7], under different configurations. This updated implementation aims to provide a more accessible and adaptable framework for leveraging the benefits of GRAM in predictive modeling for sequential diagnoses. The implemented model will be tested with the synthetic EHR dataset Synthea [8] together with the modern SNOMED CT [3] medical ontology.
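The core idea of representing each medical concept as an attention-weighted combination of its ontology ancestors can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the paper's or this study's actual code: the module name, the MLP scorer, and the ancestor-lookup structure are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class GramEmbedding(nn.Module):
    """Illustrative sketch of GRAM-style attention over ontology ancestors.

    `ancestors[i]` lists the indices of concept i's ancestors in the ontology,
    including i itself. Names and shapes here are assumptions for illustration.
    """

    def __init__(self, num_concepts, dim, attn_dim, ancestors):
        super().__init__()
        self.basic = nn.Embedding(num_concepts, dim)  # basic embeddings, one per concept
        # Small MLP that scores each (concept, ancestor) embedding pair
        self.attn = nn.Sequential(
            nn.Linear(2 * dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )
        self.ancestors = ancestors

    def forward(self, concept_idx):
        anc = torch.tensor(self.ancestors[concept_idx])
        b_anc = self.basic(anc)                               # (A, dim) ancestor embeddings
        b_i = self.basic(torch.tensor([concept_idx])).expand_as(b_anc)
        scores = self.attn(torch.cat([b_i, b_anc], dim=-1))   # (A, 1) pair scores
        alpha = torch.softmax(scores, dim=0)                  # attention over ancestors
        return (alpha * b_anc).sum(dim=0)                     # final concept representation
```

In this sketch, a leaf concept with scarce training data can shift attention toward its higher-level ancestors, which is the generalization behavior the abstract describes.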

outcomes