Abstract
BACKGROUND: The interpretability of results predicted by machine learning models is vital, especially in critical fields like healthcare. With the increasing adoption of electronic healthcare records (EHR) by medical organizations over the last decade, which has accumulated abundant electronic patient data, neural networks or deep learning techniques are gradually being applied to clinical tasks to exploit the huge potential of EHR data. However, typical deep learning models are black boxes: they are not transparent, and their prediction outcomes are difficult to interpret. METHODS: To remedy this limitation, we propose an attention neural network model for interpretable clinical prediction. Specifically, the proposed model employs an attention mechanism to capture critical features together with their attention weights on the prediction results, so that the predictions generated by the neural network model become interpretable. RESULTS: We evaluate our proposed model on a real-world clinical dataset consisting of 736 samples to predict readmissions for heart failure patients. The proposed model achieved an accuracy of 66.7% and an AUC of 69.1%, outperforming the baseline models. In addition, we visualized patient-specific attention weights, which can not only help clinicians understand the prediction outcomes but also assist them in selecting individualized treatment strategies or intervention plans. CONCLUSIONS: The experimental results demonstrate that equipping the model with an attention mechanism improves both prediction performance and interpretability.
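To illustrate the kind of feature-level attention the abstract describes, the following is a minimal sketch, not the authors' implementation: a learned scoring vector assigns each clinical feature an attention weight via a softmax, the weighted features drive a logistic prediction, and the weights themselves serve as the patient-specific interpretability signal. The feature names and all parameter values are hypothetical stand-ins for learned quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax: weights are positive and sum to 1
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical clinical features for one heart-failure patient
features = ["age", "ejection_fraction", "creatinine", "sodium", "bnp"]
x = rng.normal(size=5)        # standardized feature values (illustrative)

# Attention scoring: w stands in for learned attention parameters;
# softmax turns per-feature scores into attention weights alpha
w = rng.normal(size=5)
alpha = softmax(w * x)

# Prediction: attention-weighted feature combination through a
# logistic output, giving a readmission probability
v = rng.normal(size=5)        # stands in for learned output weights
logit = float(alpha @ (v * x))
prob_readmission = 1.0 / (1.0 + np.exp(-logit))

# alpha is the interpretable part: it ranks features by their
# contribution to this patient's prediction
for name, a in sorted(zip(features, alpha), key=lambda t: -t[1]):
    print(f"{name:>18}: {a:.3f}")
```

Because the weights sum to one and are computed per patient, a clinician can read them as a ranking of which features most influenced that individual's predicted readmission risk, which is the interpretability property the paper evaluates.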
Original language | English |
---|---|
Article number | 131 |
Number of pages | 9 |
Journal | BMC Medical Informatics and Decision Making |
Volume | 20 |
Issue number | suppl. 3 |
DOIs | |
Publication status | Published - 9 Jul 2020 |
Funding
The publication cost is supported by the National Key Research and Development Program of China under Grant No. 2016YFC1300303, the National Natural Science Foundation of China under Grant No. 61672450, and Philips Research under the Brain Bridge Project. The publication costs for this manuscript were provided in part by Grants No. 2016YFC1300303 and No. 61672450.
Funders | Funder number |
---|---|
National Natural Science Foundation of China | 61672450 |
National Key Research and Development Program of China | 2016YFC1300303 |
Keywords
- Attention mechanism
- Clinical prediction
- Deep learning
- Interpretability