A pretrain-finetune approach for improving model generalizability in outcome prediction of acute respiratory distress syndrome patients

Songlu Lin, Meicheng Yang, Chengyu Liu, Zhihong Wang, Xi Long (Corresponding author)

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Background
Early prediction of acute respiratory distress syndrome (ARDS) in critically ill patients in intensive care units (ICUs) has been intensively studied in recent years. Yet a prediction model trained on data from one hospital may not generalize well to other hospitals. It is therefore essential to develop an accurate ARDS prediction model that is adaptable to different hospitals or medical centers.

Methods
We analyzed electronic medical records of 200,859 and 50,920 hospitalized patients within 24 h of ARDS diagnosis from the Philips eICU Collaborative Research Database (eICU-CRD) and the Medical Information Mart for Intensive Care (MIMIC-IV) dataset, respectively. Patients were sorted into three groups (rapid death, long stay, and recovery) based on their condition or outcome between 24 and 72 h after ARDS diagnosis. To improve prediction performance and generalizability, a “pretrain-finetune” approach was applied: models were pretrained on the eICU-CRD dataset, finetuned on a portion (35%) of the MIMIC-IV dataset, and then tested on the remaining MIMIC-IV data. Well-known machine-learning algorithms, including logistic regression, random forest, extreme gradient boosting, and multilayer perceptron neural networks, were employed to predict ARDS outcomes. Prediction performance was evaluated using the area under the receiver-operating characteristic curve (AUC).
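For illustration only, the minimal scikit-learn sketch below shows how such a pretrain-finetune workflow could look for the multilayer perceptron model. It is not the authors' implementation: the synthetic data, feature dimensions, hyperparameters, and variable names are placeholders standing in for the eICU-CRD (pretraining) and MIMIC-IV (finetuning and testing) cohorts.

    # Minimal sketch of the pretrain-finetune idea with scikit-learn's MLPClassifier.
    # Synthetic data stands in for the eICU-CRD and MIMIC-IV cohorts; all settings
    # here are illustrative, not those used in the study.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Placeholder "source hospital" (eICU-CRD) and "target hospital" (MIMIC-IV) data
    # with three outcome classes: rapid death, long stay, recovery.
    X_eicu, y_eicu = make_classification(n_samples=5000, n_features=30,
                                         n_informative=10, n_classes=3,
                                         random_state=0)
    X_mimic, y_mimic = make_classification(n_samples=2000, n_features=30,
                                           n_informative=10, n_classes=3,
                                           random_state=1)

    # 35% of the target data is used for finetuning, the rest held out for testing.
    X_ft, X_test, y_ft, y_test = train_test_split(
        X_mimic, y_mimic, train_size=0.35, stratify=y_mimic, random_state=0)

    # 1) Pretrain on the source-hospital data.
    mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200,
                        warm_start=True, random_state=0)  # warm_start keeps weights
    mlp.fit(X_eicu, y_eicu)

    # 2) Finetune on the small target-hospital split, continuing from the
    #    pretrained weights (lower learning rate, fewer iterations).
    mlp.set_params(max_iter=50, learning_rate_init=1e-4)
    mlp.fit(X_ft, y_ft)

    # 3) Predict outcome probabilities on the held-out target-hospital test set.
    y_proba = mlp.predict_proba(X_test)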

Results
Results show that, in general, multilayer perceptron neural networks outperformed the other models. The pretrain-finetune approach yielded improved performance in predicting ARDS outcomes, achieving a micro-AUC of 0.870 on the MIMIC-IV dataset, an improvement of 0.046 over the pretrained-only model.
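The reported micro-AUC aggregates the three outcome classes into a single score. As a hedged illustration, using y_test and y_proba from the sketch above, a micro-averaged AUC can be computed by one-hot encoding the true labels and scoring them against the predicted class-probability matrix:

    # Micro-averaged AUC over the three outcome classes (illustrative only).
    from sklearn.metrics import roc_auc_score
    from sklearn.preprocessing import label_binarize

    y_test_bin = label_binarize(y_test, classes=[0, 1, 2])  # shape (n_samples, 3)
    micro_auc = roc_auc_score(y_test_bin, y_proba, average="micro")
    print(f"micro-AUC: {micro_auc:.3f}")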

Conclusions
The proposed pretrain-finetune approach can effectively improve model generalizability from one dataset to another in ARDS outcome prediction.
Original language: English
Article number: 105397
Number of pages: 8
Journal: International Journal of Medical Informatics
Volume: 186
DOIs
Publication status: Published - Jun 2024

Keywords

  • Acute respiratory distress syndrome
  • Prediction
  • Generalizability
  • Pretrain
  • Finetune
  • Intensive care unit

