Multiview Depth-Image Compression Using an Extended H.264 Encoder

Y. Morvan, D.S. Farin, P.H.N. de With

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

23 Citations (Scopus)
133 Downloads (Pure)


This paper presents a predictive-coding algorithm for the compression of multiple depth sequences obtained from a multi-camera acquisition setup. The proposed depth-prediction algorithm works by synthesizing a virtual depth image that approximates the depth image of the predicted camera. To generate this virtual depth image, we use an image-rendering algorithm known as 3D image-warping. This newly proposed prediction technique is employed in a 3D coding system to compress multiview depth sequences. For this purpose, we introduce an extended H.264 encoder that employs two prediction techniques: block-based motion prediction and the aforementioned 3D image-warping prediction. This extended H.264 encoder adaptively selects the most efficient prediction scheme for each image block using a rate-distortion criterion. We present experimental results for several multiview depth sequences, which show a quality improvement of about 2.5 dB compared with H.264 inter-coded depth images.
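The core of the described prediction step is classical 3D image-warping: back-project every reference-view pixel to a 3D point using its depth value, transform it into the predicted camera's frame, and reproject. The sketch below illustrates this under a standard pinhole-camera model; the function name, the (R, t) convention, and the nearest-pixel splatting with a far-to-near occlusion rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def warp_depth(depth_ref, K_ref, K_tgt, R, t):
    """Synthesize a virtual depth image for a target ("predicted") camera
    by 3D image-warping a reference camera's depth map.

    (R, t) maps reference-camera coordinates into the target camera's
    frame; K_ref and K_tgt are the 3x3 intrinsic matrices. All names and
    the splatting strategy are illustrative, not from the paper.
    """
    h, w = depth_ref.shape
    virt = np.zeros_like(depth_ref)

    # Homogeneous pixel grid of the reference view.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])

    # Back-project every pixel to a 3D point using its depth value.
    pts_ref = (np.linalg.inv(K_ref) @ pix) * depth_ref.ravel()

    # Rigid transform into the target frame, then perspective projection.
    pts_tgt = R @ pts_ref + t.reshape(3, 1)
    proj = K_tgt @ pts_tgt
    z = pts_tgt[2]

    keep = z > 0                       # only points in front of the camera
    us = np.round(proj[0][keep] / z[keep]).astype(int)
    vs = np.round(proj[1][keep] / z[keep]).astype(int)
    zk = z[keep]

    # Discard projections falling outside the target image.
    inb = (us >= 0) & (us < w) & (vs >= 0) & (vs < h)
    us, vs, zk = us[inb], vs[inb], zk[inb]

    # Splat far-to-near so the nearest surface wins where pixels collide.
    order = np.argsort(-zk)
    virt[vs[order], us[order]] = zk[order]
    return virt
```

In the coding system the abstract describes, an image produced this way would serve as an additional reference for each block, with the encoder choosing between it and ordinary motion-compensated prediction via a rate-distortion cost of the usual form J = D + λR.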
Original language: English
Title of host publication: Proceedings of the 9th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS 2007), 28-31 August 2007, Delft, The Netherlands
Editors: J. Blanc-Talon, W. Philips
Place of publication: Berlin, Germany
ISBN (Print): 978-3-540-74606-5
Publication status: Published - 2007
Event: ACIVS 9, Delft, The Netherlands
Duration: 28 Aug 2007 - 31 Aug 2007

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743




