Context-based region labeling for event detection in surveillance video

S. Javanbakhti, S. Zinger, P.H.N. de With

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

6 Citations (Scopus)
1 Download (Pure)

Abstract

Automatic natural scene understanding and annotating regions with semantically meaningful labels, such as road or sky, are key aspects of image and video analysis. The annotation of regions is considered helpful for improving object-of-interest detection, because the object position in the scene is also exploited. For a reliable model of a scene and its associated context information, the labeling task involves image analysis at multiple scene levels, both global and local. In this paper, we develop a general framework for performing automatic semantic labeling of video scenes by combining local features and spatial contextual cues. While maintaining a high accuracy, we pursue an algorithm with low computational complexity, so that it is suitable for real-time implementation in embedded video surveillance. We apply our approach to a complex surveillance use case and to three different datasets: WaterVisie [1], LabelMe [2] and our own dataset. We show that our method quantitatively and qualitatively outperforms two state-of-the-art approaches [3], [4].
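
The abstract describes combining local region features with spatial contextual cues to assign labels such as sky and road. The sketch below is only a generic, hypothetical illustration of that idea, not the paper's algorithm: the label set, the colour heuristics in local_scores, the vertical-position prior in spatial_prior and the weight alpha are assumptions made for this example.

```python
# Illustrative sketch: combine a per-region appearance cue with a spatial
# (vertical-position) prior to pick a coarse label. All heuristics here are
# hypothetical placeholders, not the method proposed in the paper.
import numpy as np

LABELS = ["sky", "road", "other"]

def local_scores(region_pixels):
    """Toy appearance cue: bluish regions favor 'sky', greyish regions favor 'road'."""
    mean_rgb = region_pixels.reshape(-1, 3).mean(axis=0) / 255.0
    blueness = mean_rgb[2] - 0.5 * (mean_rgb[0] + mean_rgb[1])
    greyness = 1.0 - np.std(mean_rgb)          # small channel spread = grey
    return np.array([max(blueness, 0.0), max(greyness - 0.5, 0.0), 0.2])

def spatial_prior(region_center_y, image_height):
    """Toy contextual cue: sky tends to appear near the top, road near the bottom."""
    rel_y = region_center_y / image_height     # 0 = top of frame, 1 = bottom
    return np.array([1.0 - rel_y, rel_y, 0.5])

def label_region(region_pixels, region_center_y, image_height, alpha=0.5):
    """Weighted sum of local and contextual scores; highest score wins."""
    score = alpha * local_scores(region_pixels) \
        + (1 - alpha) * spatial_prior(region_center_y, image_height)
    return LABELS[int(np.argmax(score))]

if __name__ == "__main__":
    h, w = 240, 320
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    frame[: h // 2] = (120, 160, 230)          # bluish upper half
    frame[h // 2:] = (90, 90, 90)              # grey lower half
    print(label_region(frame[: h // 2], region_center_y=h * 0.25, image_height=h))  # -> sky
    print(label_region(frame[h // 2:], region_center_y=h * 0.75, image_height=h))   # -> road
```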
Original language: English
Title of host publication: Proceedings 2014 International Conference on Information Science, Electronics and Electrical Engineering (ISEEE), 26-28 April 2014, Sapporo, Japan
Place of publication: Piscataway
Publisher: Institute of Electrical and Electronics Engineers
Volume: 3
ISBN (Print): 978-1-4799-3196-5
DOIs
Publication status: Published - 2014
Event: conference; ISEEE; 26-28 April 2014
Duration: 26 Apr 2014 - 28 Apr 2014

Conference

Conference: conference; ISEEE; 26-28 April 2014
Period: 26/04/14 - 28/04/14
Other: ISEEE
