TY - JOUR
T1 - Improving label fusion in multi-atlas based segmentation by locally combining atlas selection and performance estimation
AU - Langerak, T.R.
AU - van der Heide, U.A.
AU - Kotte, A.N.T.J.
AU - Berendsen, F.F.
AU - Pluim, J.P.W.
PY - 2015
Y1 - 2015
N2 - In multi-atlas based segmentation, a target image is segmented by registering multiple atlas images to this target image and propagating the corresponding atlas segmentations. These propagated segmentations are then combined into a single segmentation in a process called label fusion.
Multi-atlas based segmentation is a segmentation method that allows fully automatic segmentation of image populations that exhibit a large variability in shape and image quality. Fusing the results of multiple atlases makes this technique robust and reliable. Previously, we have presented the SIMPLE method for label fusion and have shown that it outperforms existing methods. However, the downside of this method is its computation time and the fact that it requires a large atlas set. This is not always a problem, but in some cases segmentation may be time-critical or large atlas sets are not available.
This paper presents a new label fusion method, a local version of the SIMPLE method, that has two advantages: when a large atlas set is available it improves the accuracy of label fusion, and when this is not the case it achieves the same accuracy as the original SIMPLE method with considerably fewer atlases. This is made possible by better utilizing the local information contained in propagated segmentations that would otherwise be discarded. Our method (semi-)automatically divides the propagated segmentations into multiple regions. A label fusion process can then be applied to each of these regions separately and the end result can be reconstructed from the multiple partial results. We demonstrate that the number of atlases needed can be reduced to 20 atlases without compromising segmentation quality. Our method is validated in an application to segmentation of the prostate, using an atlas set of 125 manually segmented images.
DO - 10.1016/j.cviu.2014.09.004
M3 - Article
SN - 1077-3142
VL - 130
SP - 71
EP - 79
JO - Computer Vision and Image Understanding
JF - Computer Vision and Image Understanding
ER -