TY - JOUR
T1 - Interpretable clinical prediction via attention-based neural network
AU - Chen, Peipei
AU - Dong, Wei
AU - Wang, Jinliang
AU - Lu, Xudong
AU - Kaymak, Uzay
AU - Huang, Zhengxing
PY - 2020/7/9
Y1 - 2020/7/9
N2 - BACKGROUND: The interpretability of results predicted by machine learning models is vital, especially in critical fields like healthcare. With the increasing adoption of electronic health records (EHR) by medical organizations over the last decade, which has accumulated abundant electronic patient data, neural networks and deep learning techniques are gradually being applied to clinical tasks to exploit the huge potential of EHR data. However, typical deep learning models are black boxes: they are not transparent, and their prediction outcomes are difficult to interpret. METHODS: To remedy this limitation, we propose an attention neural network model for interpretable clinical prediction. Specifically, the proposed model employs an attention mechanism to capture critical features, along with their attention signals on the prediction results, so that the predictions generated by the neural network model are interpretable. RESULTS: We evaluate the proposed model on a real-world clinical dataset consisting of 736 samples to predict readmissions for heart failure patients. The proposed model achieved 66.7% accuracy and 69.1% AUC, outperforming the baseline models. In addition, we display patient-specific attention weights, which not only help clinicians understand the prediction outcomes but also assist them in selecting individualized treatment strategies or intervention plans. CONCLUSIONS: The experimental results demonstrate that equipping the model with an attention mechanism improves both prediction performance and interpretability.
AB - BACKGROUND: The interpretability of results predicted by machine learning models is vital, especially in critical fields like healthcare. With the increasing adoption of electronic health records (EHR) by medical organizations over the last decade, which has accumulated abundant electronic patient data, neural networks and deep learning techniques are gradually being applied to clinical tasks to exploit the huge potential of EHR data. However, typical deep learning models are black boxes: they are not transparent, and their prediction outcomes are difficult to interpret. METHODS: To remedy this limitation, we propose an attention neural network model for interpretable clinical prediction. Specifically, the proposed model employs an attention mechanism to capture critical features, along with their attention signals on the prediction results, so that the predictions generated by the neural network model are interpretable. RESULTS: We evaluate the proposed model on a real-world clinical dataset consisting of 736 samples to predict readmissions for heart failure patients. The proposed model achieved 66.7% accuracy and 69.1% AUC, outperforming the baseline models. In addition, we display patient-specific attention weights, which not only help clinicians understand the prediction outcomes but also assist them in selecting individualized treatment strategies or intervention plans. CONCLUSIONS: The experimental results demonstrate that equipping the model with an attention mechanism improves both prediction performance and interpretability.
KW - Attention mechanism
KW - Clinical prediction
KW - Deep learning
KW - Interpretability
UR - http://www.scopus.com/inward/record.url?scp=85087814717&partnerID=8YFLogxK
U2 - 10.1186/s12911-020-1110-7
DO - 10.1186/s12911-020-1110-7
M3 - Article
C2 - 32646437
AN - SCOPUS:85087814717
SN - 1472-6947
VL - 20
JO - BMC Medical Informatics and Decision Making
JF - BMC Medical Informatics and Decision Making
IS - Suppl. 3
M1 - 131
ER -