Semantic foreground inpainting from weak supervision

Chenyang Lu (Corresponding author), Gijs Dubbelman

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Semantic scene understanding is an essential task for self-driving vehicles and mobile robots. In our work, we aim to estimate a semantic segmentation map, in which the foreground objects are removed and semantically inpainted with background classes, from a single RGB image. This semantic foreground inpainting task is performed by a single-stage convolutional neural network (CNN) that contains our novel max-pooling as inpainting (MPI) module, which is trained with weak supervision, i.e., it does not require manual background annotations for the foreground regions to be inpainted. Our approach is inherently more efficient than the previous two-stage state-of-the-art method, and outperforms it by a margin of 3% IoU for the inpainted foreground regions on Cityscapes. The performance margin increases to 6% IoU when tested on the unseen KITTI dataset. The code and the manually annotated datasets for testing are shared with the research community at https://github.com/Chenyang-Lu/semantic-foreground-inpainting.
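The abstract only names the MPI module; for intuition, the following is a minimal PyTorch sketch of the general max-pooling-as-inpainting idea it describes: foreground features are erased and background features are propagated into the masked regions by repeated stride-1 max pooling. The function name mpi_fill, the mask convention, and the iteration count are illustrative assumptions, not the authors' implementation, which is available in the linked repository.

```python
# Hypothetical sketch, not the authors' released code.
import torch
import torch.nn.functional as F

def mpi_fill(features: torch.Tensor, fg_mask: torch.Tensor,
             iterations: int = 3) -> torch.Tensor:
    """Propagate background features into masked foreground regions.

    features: (B, C, H, W) intermediate CNN feature map.
    fg_mask:  (B, 1, H, W) binary mask, 1 at foreground pixels.
    """
    background = features * (1.0 - fg_mask)  # erase foreground features
    filled = background
    for _ in range(iterations):
        # 3x3 max pooling with stride 1 keeps the spatial size and lets
        # the strongest neighbouring activations bleed into the mask.
        pooled = F.max_pool2d(filled, kernel_size=3, stride=1, padding=1)
        # overwrite only the (still masked) foreground positions
        filled = torch.where(fg_mask.bool(), pooled, background)
    return filled
```

In a single-stage network of the kind the abstract describes, such a step would sit between encoder and decoder, so that the decoder predicts background classes behind foreground objects without requiring manual background annotations for those regions.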
Original language: English
Article number: 8963753
Pages (from-to): 1334-1341
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 2
DOIs
Publication status: Published - Apr 2020

Keywords

  • Semantic scene understanding
  • computer vision for transportation
  • semantic inpainting
  • weak supervision
