Surgical navigation systems are increasingly used for complex spine procedures to avoid neurovascular injuries and minimize the risk of reoperations. Accurate patient tracking is one of the prerequisites for optimal motion compensation and navigation. Most current optical tracking systems use dynamic reference frames (DRFs) attached to the spine for patient movement tracking. However, the spine itself is subject to intrinsic movements, which can impact the accuracy of the navigation system. In this study, we aimed to detect actual patient spine features in different image views captured by the optical cameras of an augmented reality surgical navigation (ARSN) system. Using optical images from open spinal surgery cases, acquired by two gray-scale cameras, spinal landmarks were identified and matched across camera views. A computer vision framework was created for preprocessing the spine images and for detecting and matching local invariant image regions. We compared four feature detection algorithms, Speeded Up Robust Features (SURF), Maximally Stable Extremal Regions (MSER), Features from Accelerated Segment Test (FAST), and Oriented FAST and Rotated BRIEF (ORB), to identify the best approach. The framework was validated in 23 patients, and the 3D triangulation error of the matched features was < 0.5 mm. These findings indicate that spine feature detection can be used for accurate tracking in navigated surgery.
Funding: The research activity leading to the results of this paper was funded by the H2020-ECSEL Joint Undertaking under Grant Agreement No. 692470 (ASTONISH Project).
- Image analysis for markerless tracking
- Image processing
- Image-guided surgery
- Optical sensing
- Patient tracking
- Spinal surgery