Learning to Complete Partial Observations from Unpaired Prior Knowledge

Chenyang Lu (Corresponding author), Gijs Dubbelman

Research output: Contribution to journal › Article › Academic › peer-review


We present a novel training strategy that allows convolutional encoder-decoder networks to complete partially observed data by means of hallucination. As input, it takes data from a partially observed domain, for which no complete ground truth is available, and data from an unpaired prior knowledge domain, and it trains the network in an end-to-end manner. This strategy is demonstrated for the task of completing 2-D road layouts as well as 3-D vehicle shapes. In contrast to alternative approaches, our strategy is compatible with networks that use skip connections to improve detail in the completed output, while not requiring adversarial supervision. To demonstrate its benefits, our training strategy is benchmarked against two state-of-the-art baselines, one using a two-step auto-encoder training strategy and one using an adversarial strategy. Our novel strategy achieves an improvement of up to +12% F-measure on the Cityscapes dataset. The learned network intrinsically generalizes better than the baselines on unseen datasets, which is demonstrated by an improvement of up to +24% F-measure on the unseen KITTI dataset. Moreover, our approach outperforms the baselines using the same backbone network on the 3-D shape completion benchmark, reducing the Hamming distance by 15%.
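The abstract does not spell out the loss formulation, so the sketch below is a loudly hypothetical illustration of how completion from unpaired prior knowledge could be set up: a toy linear encoder-decoder (a stand-in for the paper's convolutional networks) is trained with two terms — full reconstruction on unpaired, fully observed prior-knowledge samples, and an observed-region-only loss on partial inputs, leaving the network free to hallucinate the unobserved cells. All dimensions, data, and the loss itself are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 4  # toy observation and latent sizes (illustrative only)

# Both domains share a common low-dimensional structure, as road layouts do.
basis = rng.standard_normal((H, D))
def make_layouts(n):
    return (rng.standard_normal((n, H)) @ basis > 0).astype(np.float64)

prior = make_layouts(200)         # unpaired, fully observed prior-knowledge domain
complete_gt = make_layouts(200)   # underlying complete data, never shown to the loss
mask = (rng.random((200, D)) > 0.5).astype(np.float64)  # 1 = observed cell
partial = complete_gt * mask      # the partially observed domain

# Linear encoder-decoder with a sigmoid output: a stand-in for the
# convolutional encoder-decoder in the paper.
We = 0.1 * rng.standard_normal((D, H))
Wd = 0.1 * rng.standard_normal((H, D))
forward = lambda x: 1.0 / (1.0 + np.exp(-(x @ We) @ Wd))

mse_before = np.mean((mask * (forward(partial) - partial)) ** 2)

lr = 0.5
for step in range(500):
    # Term 1: fully reconstruct unpaired prior-knowledge samples.
    # Term 2: match partial inputs only where they are observed,
    #         so the unobserved region is free to be hallucinated.
    for x, target, m in ((prior, prior, 1.0), (partial, partial, mask)):
        y = forward(x)
        d = (m * (y - target) / len(x)) * y * (1.0 - y)  # dL/d(pre-sigmoid)
        grad_Wd = (x @ We).T @ d
        grad_We = x.T @ (d @ Wd.T)
        Wd -= lr * grad_Wd
        We -= lr * grad_We

completed = forward(partial)      # completion hallucinated from partial input
mse_after = np.mean((mask * (completed - partial)) ** 2)
```

The key property of this setup is that no paired (partial, complete) example from the target domain is ever required: complete supervision comes only from the unpaired prior domain, while the partial domain constrains the network on its observed cells.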
Original language: English
Article number: 107426
Journal: Pattern Recognition
Publication status: Published - Nov 2020


Keywords

  • Completion
  • Partial observation
  • Prior knowledge
  • Weak supervision


