Non-uniform crosstalk reduction for dynamic scenes

F.A. Smit, R. van Liere, B. Fröhlich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

11 Citations (Scopus)

Abstract

Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly perceive depth. Previous work on software crosstalk reduction focused on preprocessing static scenes viewed from a fixed viewpoint. In virtual environments, however, scenes are dynamic and are viewed in real time from varying viewpoints on large display areas. In this paper, three methods are introduced for reducing crosstalk in virtual environments. First, a non-uniform crosstalk model is described, which can be used to accurately reduce crosstalk on large display areas. In addition, a novel temporal algorithm addresses the problems that occur when reducing crosstalk in dynamic scenes; this eliminates the high-frequency jitter caused by the erroneous assumption of static scenes. Finally, a perception-based metric is developed that allows us to quantify crosstalk. We provide a detailed description of the methods, discuss their tradeoffs, and compare their performance with existing crosstalk reduction methods.
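The abstract does not give the paper's algorithms, but the basic idea behind subtractive software crosstalk reduction can be sketched as follows: each eye's image is corrected by subtracting the expected leakage from the opposite eye's image. This is a minimal illustration assuming a single uniform leakage coefficient (the paper's model is non-uniform over the display); the function name and `leak` parameter are assumptions, not the authors' API.

```python
import numpy as np

def reduce_crosstalk(left, right, leak=0.05):
    """Subtract the expected leakage of the opposite eye's image.

    left, right: float image arrays with values in [0, 1].
    leak: assumed uniform crosstalk coefficient (fraction of the
    opposite eye's intensity that bleeds through).
    """
    # Pre-subtract the leaked intensity so that, after physical
    # crosstalk is added by the display, the perceived image is
    # closer to the intended one. Clip to the displayable range.
    out_left = np.clip(left - leak * right, 0.0, 1.0)
    out_right = np.clip(right - leak * left, 0.0, 1.0)
    return out_left, out_right
```

Note that the clipping at zero exposes the main limitation of pure subtraction: in dark regions there is no intensity left to subtract from, so the crosstalk there cannot be fully compensated.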
Original language: English
Title of host publication: Proceedings of the 2007 IEEE Virtual Reality Conference (VR 2007), 10-14 March 2007, Charlotte, North Carolina, USA
Place of publication: Piscataway, New Jersey, USA
Publisher: s.n.
Pages: 139-146
ISBN (Print): 1-4244-0905-5
DOIs
Publication status: Published - 2007
Event: VR 2007, Charlotte, North Carolina, USA
Duration: 10 Mar 2007 - 14 Mar 2007

Conference

Conference: VR 2007
Period: 10/03/07 - 14/03/07
Other: VR 2007, Charlotte, North Carolina, USA

