Efficient dense blur map estimation for automatic 2D-to-3D conversion

L.P.J. Vosters, G. de Haan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bae et al. [1] first proposed an optimization-based approach that propagates focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which removes the need for complex multigrid or (multilevel) preconditioning techniques. In addition, we propose facial blur compensation to correct for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may produce unnatural 3D and visual discomfort. Since visual attention tends mostly toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation yields a significant improvement.
Original language: English
Title of host publication: Proceedings of Stereoscopic Displays and Applications XXIII, 21-25 March 2012, Burlingame, California
Place of publication: Bellingham
Publication status: Published - 2012

Publication series

Name: Proceedings of SPIE
ISSN (Print): 0277-786X


