Towards Ethical AI in Healthcare: Enhancing Interpretability in ECG Classification through Physician-Guided Knowledge Distillation

Zijun Sun  •  Enze Ge  •  Ruixin Li
Abstract

In the context of medical AI applications, especially for automated diagnosis, interpretability
is a critical ethical requirement. Black-box AI models, although accurate, pose significant
risks in clinical settings due to their lack of transparency and accountability. This project
proposes a practical approach to improving both the performance and interpretability of
electrocardiogram (ECG) classification systems through a novel architecture: the Momentum Distillation
Oscillographic Transformer (MDOT). The model integrates clinical knowledge into the training
process using knowledge distillation techniques, along with signal-to-image transformation and
attention mechanisms to provide meaningful, physician-aligned insights. Our aim is to bridge the
gap between technical accuracy and ethical trustworthiness, demonstrating that interpretable
models can achieve high performance while remaining transparent, understandable, and thus
ethically preferable in high-stakes domains like healthcare.
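The abstract names knowledge distillation as the mechanism for integrating clinical knowledge into training. As a point of reference, the following is a minimal sketch of a standard Hinton-style distillation loss; it is illustrative only and does not reproduce MDOT's actual formulation (the function names, temperature, and weighting are assumptions, not details from the paper).

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=2.0, alpha=0.5):
    """Blend of soft-target KL (teacher -> student) and hard-label cross-entropy.

    T and alpha are illustrative hyperparameters, not values from the paper.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student) on temperature-softened distributions
    kl = float(np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                                   - np.log(p_student + 1e-12))))
    # Standard cross-entropy against the ground-truth label (T = 1)
    ce = -float(np.log(softmax(student_logits)[true_label] + 1e-12))
    # T**2 rescales soft-target gradients, following common KD practice
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

When the student exactly matches the teacher, the KL term vanishes and only the weighted cross-entropy remains, which is one way to sanity-check an implementation.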

Outcomes