Abstract
Objective
In this paper, we propose to evaluate the use of pre-trained convolutional neural networks (CNNs) as a feature extractor, followed by Principal Component Analysis (PCA) to find the most discriminant features, and a support vector machine (SVM) algorithm to classify neonatal sleep and wake states from Fluke® facial video frames. Using pre-trained CNNs as a feature extractor would greatly reduce the effort of collecting new neonatal data for training a neural network, which can be computationally expensive. The features are extracted after the fully connected layers (FCLs), and we compare several pre-trained CNNs, namely VGG16, VGG19, InceptionV3, GoogLeNet, ResNet, and AlexNet.
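As an illustration of this pipeline, the sketch below chains a pre-trained AlexNet feature extractor (output taken after a fully connected layer), PCA, and an SVM classifier. It is a minimal sketch under assumed tooling (PyTorch/torchvision and scikit-learn) with placeholder data; it is not the authors' implementation.

```python
# Minimal sketch of the described pipeline: pre-trained CNN features after a
# fully connected layer -> PCA -> SVM. The tooling (torchvision, scikit-learn)
# and all data below are assumptions/placeholders, not the authors' code.
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Pre-trained AlexNet; drop the final classification layer so the activation of
# the last 4096-dimensional fully connected layer serves as the feature vector.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])
alexnet.eval()

def extract_features(frames: torch.Tensor):
    """frames: (N, 3, 224, 224) float tensor of ImageNet-normalized video frames."""
    with torch.no_grad():
        return alexnet(frames).numpy()

# Placeholder tensors/labels standing in for the neonatal video frames.
X_train, y_train = torch.randn(16, 3, 224, 224), [0, 1] * 8  # 0 = sleep, 1 = wake
X_test, y_test = torch.randn(4, 3, 224, 224), [0, 1, 1, 0]

# PCA retains the components explaining 95% of the variance; the SVM performs
# the final sleep/wake classification on the reduced feature vectors.
clf = make_pipeline(PCA(n_components=0.95), SVC(kernel="rbf"))
clf.fit(extract_features(X_train), y_train)
print("test accuracy:", clf.score(extract_features(X_test), y_test))
```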
Results
From approximately 2 h of Fluke® video recordings of seven neonates, we achieved a modest classification performance, with an accuracy, sensitivity, and specificity of 65.3%, 69.8%, and 61.0%, respectively, using AlexNet on the Fluke® (RGB) video frames. This indicates that using a pre-trained model purely as a feature extractor does not suffice for highly reliable sleep and wake classification in neonates. Therefore, future work requires either a dedicated neural network trained on neonatal data or a transfer learning approach.
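For clarity, the snippet below shows how accuracy, sensitivity, and specificity of a binary sleep/wake classifier follow from a confusion matrix, with wake assumed as the positive class; the labels are illustrative placeholders, not the study's data.

```python
# Illustrative computation of the reported metrics from a binary confusion
# matrix; wake (1) is taken as the positive class by assumption, and the labels
# below are placeholders rather than the study's recordings.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # ground-truth sleep (0) / wake (1) annotations
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]  # classifier predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)  # true positive rate (wake detected as wake)
specificity = tn / (tn + fp)  # true negative rate (sleep detected as sleep)
print(accuracy, sensitivity, specificity)
```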
| Original language | English |
| --- | --- |
| Article number | 507 |
| Number of pages | 6 |
| Journal | BMC Research Notes |
| Volume | 13 |
| DOIs | |
| Publication status | Published - 4 Nov 2020 |
Keywords
- Convolutional neural networks (CNNs)
- Feature extraction
- Neonatal sleep
- Sleep and wake classification
- Video electroencephalogram (VEEG)