Modeling clinical assessor intervariability using deep hypersphere encoder-decoder networks

Joost van der Putten (Corresponding author), Fons van der Sommen, Jeroen de Groof, Maarten Struyvenberg, Svitlana Zinger, Wouter Curvers, Erik Schoon, Jacques Bergman, Peter H.N. de With

Research output: Contribution to journal › Article › Academic › peer-review


In medical imaging, a proper gold-standard ground truth, such as segmentations annotated by assessors or experts, is often lacking or only scarcely available, and the available annotations suffer from large inter-observer variability. Most state-of-the-art segmentation models do not take this variability into account and are fully deterministic in nature. In this work, we propose hypersphere encoder–decoder networks in combination with dynamic leaky ReLUs as a new method to explicitly incorporate inter-observer variability into a segmentation model. With this model, we can generate multiple segmentation proposals based on the inter-observer agreement. As a result, the output segmentations of the proposed model can be tuned to the typical margins inherent to the ambiguity in the data. For experimental validation, we provide a proof of concept on a toy data set and show improved segmentation results on two medical data sets. The proposed method has several advantages over current state-of-the-art segmentation models, such as interpretability of the uncertainty in segmentation borders. Experiments on a medical localization problem show that it offers improved biopsy localizations, which are on average 12% closer to the optimal biopsy location.
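The abstract does not specify the exact formulation of the dynamic leaky ReLU, so the following is only a minimal sketch of the general idea it names: an activation whose negative slope is a tunable parameter rather than a fixed constant, so that at inference time a single trained decoder can produce tighter or wider segmentation proposals. The function name `dynamic_leaky_relu`, the toy logits, and the choice of slope values are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dynamic_leaky_relu(x, alpha):
    """Leaky ReLU with a negative slope `alpha` chosen at inference time.

    Illustrative sketch only: alpha is treated as a knob that controls
    how much of the uncertain (negative-logit) region passes through.
    """
    return np.where(x >= 0, x, alpha * x)

# Toy decoder logits for a 1-D "segmentation" boundary region.
logits = np.array([-2.0, -0.5, 0.3, 1.5])

# A small alpha suppresses uncertain regions (a conservative proposal);
# a larger alpha lets more of them through (a wider proposal).
tight = dynamic_leaky_relu(logits, alpha=0.05)
wide = dynamic_leaky_relu(logits, alpha=0.5)
```

Sweeping `alpha` over a range would then yield a family of segmentation proposals from one model, which is the behavior the abstract attributes to tuning the output to inter-observer margins.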
Original language: English
Pages (from-to): 10705–10717
Number of pages: 13
Journal: Neural Computing and Applications
Issue number: 14
Early online date: 21 Nov 2019
Publication status: Published - 1 Jul 2020


  • Deep learning
  • Intervariability modeling
  • Localization
  • Segmentation


