Real-time estimation of the 3D transformation between images with large viewpoint differences in cluttered environments

D.W.J.M. van de Wouw, M.A.R. Pieck, G. Dubbelman, P.H.N. de With

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic

2 Citations (Scopus)
302 Downloads (Pure)

Abstract

This work focuses on estimating an accurate 3D transformation in real time, which is used to register images acquired from different viewpoints. The main challenges are significant differences in image appearance, which originate from lateral displacements and parallax, inconsistencies in our 3D model, and achieving real-time execution. To this end, we propose a feature-based method using a single synthesized view, which can cope with significant differences in image appearance. The 3D transformation is estimated using an EPnP refinement to minimize the influence of inconsistencies in the 3D model. We demonstrate that the proposed method achieves over 95% transformation accuracy for lateral displacements up to 350 cm, while still achieving 85% accuracy at displacements of 530 cm. Additionally, with a running time of 100 milliseconds, we achieve real-time execution as a result of efficiency optimizations and GPU implementations of time-critical components.
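As a minimal sketch of the pose-estimation step the abstract describes (not the authors' implementation), the snippet below estimates the 3D transformation from 2D-3D correspondences with an EPnP solver inside RANSAC, using OpenCV. The camera intrinsics K, the matched points, and the numeric thresholds are assumed placeholder inputs.

```python
import numpy as np
import cv2

def estimate_transformation(points_3d, points_2d, K, dist_coeffs=None):
    """Estimate the camera pose (R, t) from Nx3 model points and Nx2 image points
    using EPnP inside RANSAC; returns rotation matrix, translation, and inlier indices."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros((5, 1))  # assume undistorted images

    # Robust EPnP: RANSAC suppresses feature mismatches and 3D-model inconsistencies.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        K, dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=3.0,   # pixel threshold (assumed value)
        iterationsCount=200,     # assumed iteration budget
    )
    if not ok:
        raise RuntimeError("Pose estimation failed")

    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers
```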
Original language: English
Title of host publication: Image Processing: Algorithms and Systems XV
Editors: S.S. Agaian, K.O. Egiazarian, A.P. Gotchev
Place of publication: Springfield
Publisher: Society for Imaging Science and Technology (IS&T)
Pages: 109-116
Number of pages: 8
DOIs
Publication status: Published - Feb 2017
Event: IS&T International Symposium on Electronic Imaging Science and Technology: Image Processing: Algorithms and Systems XV - Burlingame, United States
Duration: 29 Jan 2017 - 2 Feb 2017

Conference

Conference: IS&T International Symposium on Electronic Imaging Science and Technology
Country/Territory: United States
City: Burlingame
Period: 29/01/17 - 2/02/17
