SemiVT-Surge: Semi-Supervised Video Transformer for Surgical Phase Recognition

Yiping Li, Ronald de Jong, Sahar Nasirihaghighi, Tim Jaspers, Romy van Jaarsveld, Gino Kuiper, Richard van Hillegersberg, Fons van der Sommen, Jelle Ruurda, Marcel Breeuwer, Yasmina Al Khalil

Research output: Working paper › Preprint › Academic


Abstract

Accurate surgical phase recognition is crucial for computer-assisted interventions and surgical video analysis. Annotating long surgical videos is labor-intensive, driving research toward leveraging unlabeled data for strong performance with minimal annotations. Although self-supervised learning has gained popularity by enabling large-scale pretraining followed by fine-tuning on small labeled subsets, semi-supervised approaches remain largely underexplored in the surgical domain. In this work, we propose a video transformer-based model with a robust pseudo-labeling framework. Our method incorporates temporal consistency regularization for unlabeled data and contrastive learning with class prototypes, which leverages both labeled data and pseudo-labels to refine the feature space. Through extensive experiments on the private RAMIE (Robot-Assisted Minimally Invasive Esophagectomy) dataset and the public Cholec80 dataset, we demonstrate the effectiveness of our approach. By incorporating unlabeled data, we achieve state-of-the-art performance on RAMIE with a 4.9% accuracy increase and obtain comparable results to full supervision while using only 1/4 of the labeled data on Cholec80. Our findings establish a strong benchmark for semi-supervised surgical phase recognition, paving the way for future research in this domain.
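The abstract describes assigning pseudo-labels to unlabeled clips and refining the feature space with contrastive learning against class prototypes. As an illustration only (the paper's actual formulation is not given here), the prototype step can be sketched in plain Python: prototypes are the normalized mean embedding per phase computed from labeled data, and an unlabeled clip receives a pseudo-label only when its cosine similarity to the nearest prototype clears a confidence threshold. The function names and the threshold `tau` are hypothetical, not taken from the paper.

```python
import math

def l2_normalize(v):
    # Unit-normalize a feature vector; guard against the zero vector.
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def class_prototypes(features, labels, num_classes):
    # Prototype per class = normalized mean of normalized labeled embeddings.
    sums = {c: None for c in range(num_classes)}
    counts = {c: 0 for c in range(num_classes)}
    for f, y in zip(features, labels):
        f = l2_normalize(f)
        sums[y] = f if sums[y] is None else [a + b for a, b in zip(sums[y], f)]
        counts[y] += 1
    return {c: l2_normalize([x / counts[c] for x in sums[c]])
            for c in range(num_classes) if counts[c]}

def pseudo_label(feature, prototypes, tau=0.8):
    # Assign the nearest prototype's class, but only if the cosine
    # similarity is confident enough; otherwise return None (clip is skipped).
    f = l2_normalize(feature)
    sims = {c: sum(a * b for a, b in zip(f, p)) for c, p in prototypes.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] >= tau else None
```

Confident pseudo-labels produced this way can then feed the contrastive objective alongside the ground-truth labels, pulling embeddings toward their class prototype.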
Original language: English
Publisher: arXiv.org
Number of pages: 12
Volume: 2506.01471
Publication status: Published - 2 Jun 2025

Bibliographical note

Accepted for MICCAI 2025

Keywords

  • cs.CV
