Barrett's cancer is a treatable disease when detected at an early stage; however, current screening protocols often fail to find the disease early. Volumetric Laser Endomicroscopy (VLE) is a promising new imaging tool for detecting dysplasia in Barrett's esophagus (BE) at an early stage by acquiring cross-sectional images of the microscopic structure of BE up to 3 mm deep. However, interpretation of VLE scans is difficult for physicians due to both the size and the subtlety of the gray-scale data. Therefore, algorithms that can accurately identify cancerous regions are highly valuable for the interpretation of VLE data. In this study, we propose a fully automatic, multi-step Computer-Aided Detection (CAD) algorithm that optimally leverages the effectiveness of deep learning strategies by encoding the principal dimension in VLE data. We also show that combining the encoded dimensions with conventional machine learning techniques further improves results while maintaining interpretability. We train and validate our algorithm on a new, histopathologically validated set of in-vivo VLE snapshots, use an independent test set to assess the performance of the model, and compare our algorithm against previous state-of-the-art systems. With the encoded principal dimension, we obtain an Area Under the Curve (AUC) of 0.93 and an F1 score of 87.4% on the test set. We show this is a significant improvement over the state-of-the-art values of 0.89 and 83.1%, respectively, demonstrating the effectiveness of our approach.
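As a minimal sketch of how the reported evaluation metrics can be computed from per-sample classifier scores and binary labels, the snippet below implements AUC via the rank-statistic (Mann-Whitney) formulation and the F1 score directly from its definition. The sample data and the 0.5 decision threshold are illustrative assumptions, not values from the study.

```python
def auc(labels, scores):
    """Area Under the ROC Curve via the Mann-Whitney rank statistic:
    the fraction of (positive, negative) pairs where the positive
    sample receives the higher score (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, preds):
    """F1 score: harmonic mean of precision and recall,
    written as 2*TP / (2*TP + FP + FN)."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

# Illustrative toy data (NOT from the paper's test set).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]  # assumed 0.5 threshold

print(round(auc(labels, scores), 3))  # AUC from continuous scores
print(round(f1(labels, preds), 3))    # F1 from thresholded predictions
```

Note that AUC is threshold-free (it ranks the raw scores), whereas F1 depends on the chosen operating point, which is why both are commonly reported together for CAD systems.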
- Deep learning
- Volumetric laser endomicroscopy