Explainability of Large Language Models (LLMs) for Anemia Diagnosis

Elisa Castagnari
Abstract

This work addresses the critical need for transparency in AI used in decision-making processes, particularly in clinical diagnosis. While large language models (LLMs) are increasingly utilized in various applications, their opacity often limits their effectiveness and trustworthiness in clinical settings. The aim is to enhance the explainability of LLMs without compromising their performance, thereby enabling clinicians to make better, more personalized, and targeted diagnoses. Inspired by clinical guidelines and studies on Deep Reinforcement Learning (DRL) for optimal diagnostic sequences, this research approaches diagnosis as a sequential decision-making problem. Open-source LLMs, such as LLaMA-3 [lla] and Mistral 7B v0.3 [JSM+23], are leveraged to learn the most efficient diagnostic pathways from Electronic Health Records (EHRs).
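As a concrete illustration of the sequential framing, the minimal sketch below walks an agent through one diagnostic episode: it repeatedly selects a test, reads the result from an EHR-like record, and stops once the evidence is sufficient. All names here (`choose_next_test`, `diagnostic_episode`, the test list, the stopping rule) are illustrative assumptions, not the implementation developed in this work, where the test-selection policy would be delegated to an LLM.

```python
# Minimal sketch of diagnosis as sequential decision-making.
# All identifiers are hypothetical illustrations, not the
# author's implementation.

AVAILABLE_TESTS = ["CBC", "ferritin", "B12", "reticulocyte_count"]

def choose_next_test(observations, remaining_tests):
    """Policy stub: pick the next test given what has been observed.
    In the described approach this choice would be made by an LLM;
    here we simply take the first remaining test."""
    return remaining_tests[0] if remaining_tests else None

def diagnostic_episode(ehr_record):
    """Run one diagnostic trajectory: order a test, observe, repeat."""
    observations = {}
    remaining = list(AVAILABLE_TESTS)
    while remaining:
        test = choose_next_test(observations, remaining)
        remaining.remove(test)
        # Reading a result corresponds to querying the EHR.
        observations[test] = ehr_record.get(test, "unavailable")
        # Toy stopping rule standing in for a learned one.
        if observations.get("ferritin") == "low":
            return "iron-deficiency anemia", observations
    return "inconclusive", observations

diagnosis, path = diagnostic_episode({"CBC": "low Hb", "ferritin": "low"})
print(diagnosis, path)
```

The recorded sequence of tests (`path`) is exactly the diagnostic pathway that the approach aims to make both efficient and inspectable.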
By employing advanced prompting techniques and chain-of-thought reasoning [MTS+24], the approach ensures that the AI’s decision-making process is transparent and interpretable. This explainable AI model allows doctors to understand and evaluate the rationale behind each step, fostering trust and enabling safer, more effective clinical applications.
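The sketch below shows how one such chain-of-thought step might be prompted: the model receives the observations so far and is asked to reason aloud before naming the next test. The prompt wording and the `query_llm` stub are assumptions for illustration; the work's actual prompts and model calls are not reproduced here.

```python
# Sketch of a chain-of-thought prompt for one diagnostic step.
# Prompt wording and query_llm are illustrative assumptions.

COT_TEMPLATE = """You are assisting with anemia diagnosis.
Observed so far: {observations}
Tests still available: {remaining}

Think step by step:
1. Summarize what the current results suggest.
2. State which anemia subtypes remain plausible.
3. Choose the single most informative next test, or give a
   final diagnosis if the evidence is already sufficient.
Explain your reasoning before the final answer."""

def build_prompt(observations, remaining):
    return COT_TEMPLATE.format(observations=observations,
                               remaining=remaining)

def query_llm(prompt):
    """Placeholder for a call to an open-source model such as
    LLaMA-3 or Mistral 7B; the step-by-step transcript it returns
    is what makes each decision inspectable by a clinician."""
    raise NotImplementedError

prompt = build_prompt({"CBC": "low Hb", "ferritin": "low"},
                      ["B12", "reticulocyte_count"])
print(prompt)
# response = query_llm(prompt)  # would yield the model's reasoning
```

Because the model's intermediate reasoning is returned as text, a clinician can audit the justification for each test before acting on it.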

Outcomes