Deep learning-based recognition of key anatomical structures during robot-assisted minimally invasive esophagectomy

R.B. den Boer, T.J.M. Jaspers, C. de Jongh, J.P.W. Pluim, F. van der Sommen, T. Boers, R. van Hillegersberg, M.A.J.M. Van Eijnatten, J.P. Ruurda (Corresponding author)

Research output: Contribution to journal › Journal article › Academic › peer review

4 Citations (Scopus)
55 Downloads (Pure)

Abstract

Objective: To develop a deep learning algorithm for anatomy recognition in thoracoscopic video frames from robot-assisted minimally invasive esophagectomy (RAMIE) procedures. Background: RAMIE is a complex operation with substantial perioperative morbidity and a considerable learning curve. Automatic anatomy recognition may improve surgical orientation and recognition of anatomical structures, and might contribute to reducing morbidity or shortening learning curves. Studies on anatomy recognition in complex surgical procedures are currently lacking. Methods: Eighty-three videos of consecutive RAMIE procedures performed between 2018 and 2022 were retrospectively collected at University Medical Center Utrecht. A surgical PhD candidate and an expert surgeon annotated the azygos vein and vena cava, the aorta, and the right lung on 1050 thoracoscopic frames. Of these, 850 frames were used to train a convolutional neural network (CNN) to segment the anatomical structures; the remaining 200 frames were used to test the CNN. The Dice coefficient and 95% Hausdorff distance (95HD) were calculated to assess algorithm accuracy. Results: The median Dice coefficient of the algorithm was 0.79 (IQR = 0.20) for segmentation of the azygos vein and/or vena cava. Median Dice coefficients of 0.74 (IQR = 0.86) and 0.89 (IQR = 0.30) were obtained for segmentation of the aorta and lung, respectively. Inference time was 0.026 s (39 Hz). The predictions of the deep learning algorithm were compared with the expert surgeon annotations, yielding median Dice coefficients of 0.70 (IQR = 0.19), 0.88 (IQR = 0.07), and 0.90 (IQR = 0.10) for the vena cava and/or azygos vein, aorta, and lung, respectively. Conclusion: This study shows that deep learning-based semantic segmentation has potential for anatomy recognition in RAMIE video frames. The inference time of the algorithm enables real-time anatomy recognition. Clinical applicability should be assessed in prospective clinical studies.
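The two evaluation metrics reported in the abstract can be illustrated with a short sketch. This is not the authors' evaluation code; it is a minimal NumPy implementation of the Dice coefficient on binary masks, together with a simplified 95th-percentile Hausdorff distance computed over all foreground pixels (published implementations typically use boundary pixels only).

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice overlap between two binary masks (1.0 = perfect)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def hausdorff_95(pred, gt):
    """Simplified symmetric 95% Hausdorff distance in pixels.

    Uses all foreground pixels rather than boundary pixels, which
    is a coarser variant of the 95HD reported in the literature.
    """
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    # Pairwise Euclidean distances between all foreground coordinates
    d = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=-1)
    forward = d.min(axis=1)   # each pred pixel -> nearest gt pixel
    backward = d.min(axis=0)  # each gt pixel -> nearest pred pixel
    return float(np.percentile(np.concatenate([forward, backward]), 95))

# Two overlapping 4x4 squares in a 10x10 frame (toy example)
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))  # 0.5625 (intersection 9, sizes 16 + 16)
```

In practice, per-frame scores like these are aggregated as medians with interquartile ranges, as in the results above, because segmentation metrics on surgical video are often skewed by difficult frames.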

Original language: English
Pages (from-to): 5164-5175
Number of pages: 12
Journal: Surgical Endoscopy
Volume: 37
Issue number: 7
Status: Published - Jul 2023
