Semantic foreground inpainting from weak supervision

Chenyang Lu (Corresponding author), Gijs Dubbelman

Research output: Contribution to journal › Article › Academic › peer-review



Semantic scene understanding is an essential task for self-driving vehicles and mobile robots. In our work, we aim to estimate a semantic segmentation map, in which the foreground objects are removed and semantically inpainted with background classes, from a single RGB image. This semantic foreground inpainting task is performed by a single-stage convolutional neural network (CNN) that contains our novel max-pooling as inpainting (MPI) module, which is trained with weak supervision, i.e., it does not require manual background annotations for the foreground regions to be inpainted. Our approach is inherently more efficient than the previous two-stage state-of-the-art method, and outperforms it by a margin of 3% IoU for the inpainted foreground regions on Cityscapes. The performance margin increases to 6% IoU, when tested on the unseen KITTI dataset. The code and the manually annotated datasets for testing are shared with the research community at .
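The abstract describes the core idea of the MPI module: features at foreground locations are discarded and refilled from surrounding background features via max-pooling. The sketch below is a minimal, hypothetical NumPy illustration of that principle on a raw feature map, not the authors' CNN implementation: masked (foreground) positions are filled iteratively with the maximum over neighboring already-valid (background) features. The function name `mpi_inpaint`, the kernel size, and the iteration scheme are all assumptions for illustration.

```python
import numpy as np

def mpi_inpaint(features, fg_mask, kernel=3, max_iters=10):
    """Illustrative max-pooling-as-inpainting sketch (not the paper's exact module).

    features: (C, H, W) float array of feature activations.
    fg_mask:  (H, W) bool array, True where foreground must be inpainted.
    Foreground positions are filled with the max over a kernel x kernel
    neighborhood of valid (background or already-filled) features.
    """
    C, H, W = features.shape
    feat = features.copy()
    valid = ~fg_mask.copy()          # background positions are trusted
    r = kernel // 2
    for _ in range(max_iters):
        if valid.all():
            break
        # Mask out invalid positions with -inf so max-pooling ignores them.
        work = np.where(valid, feat, -np.inf)
        padded = np.pad(work, ((0, 0), (r, r), (r, r)),
                        constant_values=-np.inf)
        pooled = np.full_like(feat, -np.inf)
        for dy in range(kernel):
            for dx in range(kernel):
                pooled = np.maximum(pooled, padded[:, dy:dy + H, dx:dx + W])
        # Fill foreground positions that have at least one valid neighbor.
        reachable = np.isfinite(pooled).all(axis=0)
        fill = (~valid) & reachable
        feat[:, fill] = pooled[:, fill]
        valid |= fill
    return feat
```

On a single-channel 4x4 feature map with one masked pixel, the masked value is simply replaced by the maximum of its eight background neighbors; larger masked regions are filled progressively from their borders inward over successive iterations.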
Original language: English
Article number: 8963753
Pages (from-to): 1334-1341
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 2
Publication status: Published - Apr 2020


  • Semantic scene understanding
  • computer vision for transportation
  • semantic inpainting
  • weak supervision

