Patch-Based Experiments with Object Classification in Video Surveillance

R.G.J. Wijnhoven, P.H.N. de With

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

We present a patch-based algorithm for object classification in video surveillance. Within detected regions-of-interest (ROIs) of moving objects in the scene, a feature vector is computed by template matching against a large set of image patches. Instead of matching raw image pixels, we use Gabor-filtered versions of the input image at several scales. This approach has been adopted from recent experiments in generic object-recognition tasks. We present results for a new, typical video surveillance dataset containing over 9,000 object images. Furthermore, we compare our system's performance on another, smaller existing surveillance dataset. We have found that with 50 or more training samples, our detection rate is on average above 95%. Because of the inherent scalability of the algorithm, an embedded system implementation is well within reach.
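To make the pipeline described in the abstract concrete, the sketch below illustrates the general idea in Python with OpenCV: Gabor-filter an object ROI at a few scales and orientations, then score a bank of stored image patches against the filter responses by template matching to form the feature vector. The kernel parameters, scales, orientations, and the use of OpenCV's matchTemplate for patch scoring are illustrative assumptions, not the exact settings of the paper.

import cv2
import numpy as np

def gabor_responses(roi_gray, scales=(1.0, 0.5),
                    thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Filter the detected ROI with Gabor kernels at several orientations,
    # repeating the filtering on downscaled copies to cover several scales.
    # Kernel size and parameters are illustrative, not the paper's settings.
    responses = []
    for s in scales:
        img = cv2.resize(roi_gray, (0, 0), fx=s, fy=s)
        for theta in thetas:
            kernel = cv2.getGaborKernel((11, 11), sigma=3.0, theta=theta,
                                        lambd=8.0, gamma=0.5, psi=0.0)
            responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    return responses

def patch_feature_vector(responses, patches):
    # One feature per stored patch: the best normalized template-matching
    # score of that patch over all Gabor response maps of the ROI.
    # 'patches' are assumed to be float32 crops taken from filtered training images.
    feats = []
    for patch in patches:
        best = -1.0
        for resp in responses:
            if resp.shape[0] >= patch.shape[0] and resp.shape[1] >= patch.shape[1]:
                score = cv2.matchTemplate(resp, patch, cv2.TM_CCOEFF_NORMED)
                best = max(best, float(score.max()))
        feats.append(best)
    return np.asarray(feats, dtype=np.float32)

The resulting fixed-length vector can then be fed to any standard classifier trained on the labeled surveillance objects; the paper's specific classifier choice is not detailed in this record.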
Original language: English
Title of host publication: Proceedings of the 9th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS 2007), 28-31 August 2007, Delft, The Netherlands
Editors: J. Blanc-Talon, W. Philips
Place of publication: Berlin, Germany
Publisher: Springer
Pages: 285-296
ISBN (Print): 978-3-540-74606-5
DOIs
Publication status: Published - 2007
Event: ACIVS 2007, Delft, The Netherlands
Duration: 28 Aug 2007 - 31 Aug 2007

Publication series

Name: Lecture Notes in Computer Science
Volume: 4678
ISSN (Print): 0302-9743

Conference

Conference: ACIVS 2007, Delft, The Netherlands
Period: 28/08/07 - 31/08/07
