Adding context information to video analysis for surveillance applications

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

Smart surveillance systems become more valuable when they grow in reliability and robustness while simultaneously offering a higher semantic level of understanding. To reach this higher level of semantic scene understanding, objects and their actions must be interpreted in their context, which requires the extraction of contextual information. This chapter explores several techniques for extracting such contextual information, including spatial, motion, depth and co-occurrence cues, depending on the application. The chapter then presents specific case studies that evaluate the usefulness of context information, based on: (1) region labeling of the surroundings of objects, (2) motion analysis of the water for moving ships, (3) traffic sign recognition for safety event evaluation and (4) the use of depth signals for obstacle detection. The chapter shows that these cases can be solved with improved robustness and semantic understanding. The case studies indicate up to 6.8% improvement in reliable, correct object understanding and the novel possibility of labeling scene events as safe or unsafe depending on the object behavior and the detected surrounding context. Overall, the chapter shows that using contextual information improves automated video surveillance analysis: it not only increases the reliability of moving object detection, but also enables scene understanding that goes far beyond object-level understanding.
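The chapter's own pipelines are not reproduced here, but the minimal sketch below illustrates the general idea the abstract describes: fusing a moving-object detection with a label of its surrounding region (and, where available, recognized traffic-sign information) to raise the semantic level from "object detected" to "event is safe/unsafe". All names and rules (Detection, SurroundingContext, classify_event, the thresholds) are hypothetical and for illustration only; they are not taken from the chapter.

```python
# Hypothetical illustration (not the chapter's implementation): combining an
# object detection with a label of its surrounding region to classify an event.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "person", "ship", "car"
    confidence: float   # detector score in [0, 1]


@dataclass
class SurroundingContext:
    region_label: str                  # e.g. "water", "road", "restricted_zone"
    speed_limit: float | None = None   # from a recognized traffic sign, if any


def classify_event(det: Detection, ctx: SurroundingContext,
                   measured_speed: float | None = None) -> str:
    """Toy rule set: the surrounding context decides whether the same
    detection is reported as a safe or an unsafe event."""
    if det.confidence < 0.5:
        return "ignore"                       # unreliable detection
    if det.label == "person" and ctx.region_label == "water":
        return "unsafe: person in water"      # context changes the interpretation
    if (det.label == "car" and ctx.speed_limit is not None
            and measured_speed is not None and measured_speed > ctx.speed_limit):
        return "unsafe: speeding near traffic sign"
    return "safe"


if __name__ == "__main__":
    print(classify_event(Detection("person", 0.9),
                         SurroundingContext("water")))            # unsafe
    print(classify_event(Detection("car", 0.8),
                         SurroundingContext("road", 50.0), 80.0)) # unsafe: speeding
```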

Original language: English
Title of host publication: Emerging Research on Networked Multimedia Communication Systems
Editors: Dimitris Kanellopoulos
Publisher: IGI Global
Chapter: 5
Pages: 159-203
Number of pages: 45
ISBN (Electronic): 9781466688513
ISBN (Print): 1466688505, 9781466688506
DOI: 10.4018/978-1-4666-8850-6.ch005
Publication status: Published - 14 Aug 2015

Fingerprint

Semantics
Labeling
Traffic signs
Ships
Water
Object detection
Motion analysis

Cite this

Javanbakhti, S., Bao, X., Creusen, I., Hazelhoff, L., Sanberg, W. P., van de Wouw, D., ... de With, P. H. N. (2015). Adding context information to video analysis for surveillance applications. In D. Kanellopoulos (Ed.), Emerging Research on Networked Multimedia Communication Systems (pp. 159-203). IGI Global. https://doi.org/10.4018/978-1-4666-8850-6.ch005