Pattern Discovery in Clinical Workflows for Anemia Diagnosis

Abstract

Explainability is a critical requirement in clinical decision-making, where the reasoning behind diagnostic pathways must be clear and transparent. In clinical practice, diagnoses are often made by following established sequences of steps, such as laboratory tests, observations, or imaging, outlined in guidelines created by expert organizations. Inspired by these clinical guidelines, this study focuses on developing explainable diagnostic pathways using Large Language Models (LLMs). The approach was tested with two LLMs, Large Language Model Meta AI (LLaMA) and Mistral, on a synthetic yet realistic dataset for the differential diagnosis of anemia and its subtypes. By leveraging advanced prompting techniques, the aim is to enhance the transparency and interpretability of the decision-making process, generating diagnostic pathways that can be easily understood and validated by clinicians. The experiments demonstrated that LLMs have significant potential for discovering and explaining clinical pathways from patient data, with LLaMA consistently outperforming Mistral across various metrics. Using the Chain of Thought technique, we analyzed the rationale behind each model's diagnostic decisions, providing insight into their reasoning processes and the nature of the errors they tend to make. These findings underscore the importance of explainability in applying LLMs to clinical diagnostics and offer valuable insights into how these models can be better aligned with clinical standards. The discovered patterns will be compared using RDF and the P-Plan ontology, providing an explainable structure.
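To make the final sentence concrete, the sketch below shows one way a discovered diagnostic pathway could be encoded with RDF and the P-Plan ontology. This is a minimal illustration, assuming Python with the rdflib library; the `ex:` namespace and the anemia-specific step names (hemoglobin and MCV measurements) are invented for this example and are not taken from the paper, while the P-Plan terms (`p-plan:Plan`, `p-plan:Step`, `p-plan:isStepOfPlan`, `p-plan:isPrecededBy`) are from the published ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Namespaces: P-Plan is the published ontology (http://purl.org/net/p-plan#);
# EX is an illustrative namespace invented for this sketch.
PPLAN = Namespace("http://purl.org/net/p-plan#")
EX = Namespace("http://example.org/anemia#")

g = Graph()
g.bind("p-plan", PPLAN)
g.bind("ex", EX)

# The diagnostic pathway as a whole is modeled as a p-plan:Plan.
plan = EX.AnemiaDiagnosisPlan
g.add((plan, RDF.type, PPLAN.Plan))
g.add((plan, RDFS.label, Literal("Differential diagnosis of anemia (illustrative)")))

# Each decision point extracted from an LLM-generated pathway becomes a
# p-plan:Step. The step names and ordering here are purely illustrative.
check_hb = EX.CheckHemoglobin
check_mcv = EX.CheckMCV
for step, label in [(check_hb, "Measure hemoglobin level"),
                    (check_mcv, "Measure mean corpuscular volume (MCV)")]:
    g.add((step, RDF.type, PPLAN.Step))
    g.add((step, PPLAN.isStepOfPlan, plan))
    g.add((step, RDFS.label, Literal(label)))

# p-plan:isPrecededBy encodes the ordering of steps within the pathway.
g.add((check_mcv, PPLAN.isPrecededBy, check_hb))

print(g.serialize(format="turtle"))
```

Serializing pathways this way yields a machine-readable graph in which step order and plan membership are explicit triples, so pathways produced by different models can be inspected, validated, or compared with standard RDF tooling.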

Outcomes