Deformable image registration using convolutional neural networks

Koen A.J. Eppenhof, Maxime W. Lafarge, Pim Moeskops, Mitko Veta, Josien P.W. Pluim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

70 Citations (Scopus)
1655 Downloads (Pure)

Abstract

Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
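The abstract describes the core idea: a 3D CNN takes a fixed/moving image pair and regresses the x, y, and z components of a coarse transformation grid, supervised by synthetic random deformations rather than manually annotated correspondences. The sketch below is a minimal illustration of that training setup; the architecture, the grid resolution, and the use of trilinear interpolation of the control-point grid (in place of the paper's thin plate spline interpolation) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of grid-regression registration trained on synthetic
# random deformations. Layer sizes, grid resolution, and the trilinear
# upsampling of the control grid are illustrative assumptions, not the
# published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridRegressionNet(nn.Module):
    def __init__(self, grid_size=(4, 4, 4)):
        super().__init__()
        # Two-channel input: fixed and moving volumes stacked along channels.
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(grid_size),
        )
        # Three output channels: x, y, z components of the control-point grid.
        self.head = nn.Conv3d(64, 3, kernel_size=1)

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)   # (B, 2, D, H, W)
        return self.head(self.features(x))      # (B, 3, gd, gh, gw)

def random_synthetic_pair(volume, grid_size=(4, 4, 4), max_disp=0.05):
    """Deform `volume` with a random coarse displacement grid; return the
    deformed volume and the ground-truth grid that produced it."""
    b = volume.shape[0]
    # Random control-point displacements in normalized [-1, 1] coordinates.
    disp = (torch.rand(b, 3, *grid_size) * 2 - 1) * max_disp
    # Upsample the coarse grid to a dense displacement field.
    dense = F.interpolate(disp, size=volume.shape[2:], mode='trilinear',
                          align_corners=True)
    identity = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(b, 1, 1),
                             size=volume.shape, align_corners=True)
    warped = F.grid_sample(volume, identity + dense.permute(0, 2, 3, 4, 1),
                           align_corners=True)
    return warped, disp

if __name__ == "__main__":
    net = GridRegressionNet()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    fixed = torch.rand(2, 1, 32, 32, 32)        # toy stand-in for lung CT
    for _ in range(5):
        moving, target_grid = random_synthetic_pair(fixed)
        pred_grid = net(fixed, moving)
        loss = F.mse_loss(pred_grid, target_grid)   # regress the known grid
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Because the ground-truth grid is generated on the fly, training needs only a small set of representative images and no annotated deformations, which matches the training strategy described in the abstract.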

Original language: English
Title of host publication: Medical Imaging 2018: Image Processing
Place of publication: Bellingham
Publisher: SPIE
Number of pages: 6
ISBN (Electronic): 9781510616370
DOIs
Publication status: Published - 15 Mar 2018
Event: SPIE Medical Imaging 2018 - Houston, United States
Duration: 10 Feb 2018 - 15 Feb 2018

Publication series

Name: Proceedings of SPIE
Volume: 10574

Conference

Conference: SPIE Medical Imaging 2018
Country/Territory: United States
City: Houston
Period: 10/02/18 - 15/02/18

Keywords

  • convolutional networks
  • deformable image registration
  • machine learning
  • thoracic CT

