View interpolation for medical images on autostereoscopic displays

S. Zinger, D. Ruijters, Q.L. Do, P.H.N. de With

Research output: Contribution to journal › Article › Academic › peer-review

17 Citations (Scopus)
12 Downloads (Pure)

Abstract

We present an approach for efficiently rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than consumer applications. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting a limited set of views, each enhanced with a depth signal. We propose an efficient view interpolation and rendering algorithm at the receiver side, based on a texture+depth data representation, that can operate with a limited number of views. We study the main artifacts that occur during rendering, namely occlusions, and quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio for rendered texture and depth, as well as the number of disoccluded pixels, as a function of the angle between the surrounding cameras.
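The abstract describes a receiver-side pipeline that warps transmitted texture+depth views to intermediate viewpoints and measures the disocclusions that result. The sketch below is not the authors' algorithm; it is a minimal, generic depth-image-based forward warp in Python/NumPy, assuming pinhole cameras with shared (hypothetical) intrinsics K and a relative pose (R, t), intended only to illustrate how a texture+depth view can be re-projected and how the disoccluded-pixel fraction and PSNR used in such an evaluation could be computed.

import numpy as np

def warp_view(texture, depth, K, R, t):
    """Forward-warp a texture+depth reference view to a target camera.

    texture : (H, W, 3) colour image of the reference view
    depth   : (H, W) per-pixel depth of the reference view (> 0)
    K       : (3, 3) camera intrinsics, assumed shared by both views
    R, t    : rotation (3, 3) and translation (3,) from reference to target

    Returns the warped texture and a boolean mask that is True at
    disoccluded target pixels (no reference pixel projects there).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)

    # Back-project pixels to 3-D points in the reference camera frame.
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    # Transform to the target camera frame and re-project.
    proj = K @ (R @ pts + t.reshape(3, 1))
    z = proj[2]

    # Project only points in front of the target camera.
    ok = z > 1e-9
    x = np.zeros(z.shape, dtype=int)
    y = np.zeros(z.shape, dtype=int)
    x[ok] = np.round(proj[0, ok] / z[ok]).astype(int)
    y[ok] = np.round(proj[1, ok] / z[ok]).astype(int)
    ok &= (x >= 0) & (x < w) & (y >= 0) & (y < h)

    warped = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    holes = np.ones((h, w), dtype=bool)          # True = disocclusion
    src_u, src_v = u.reshape(-1), v.reshape(-1)
    for i in np.flatnonzero(ok):                 # z-buffered point splatting
        if z[i] < zbuf[y[i], x[i]]:
            zbuf[y[i], x[i]] = z[i]
            warped[y[i], x[i]] = texture[src_v[i], src_u[i]]
            holes[y[i], x[i]] = False
    return warped, holes

def psnr(reference, rendered, peak=255.0):
    # Peak signal-to-noise ratio of a rendered view against ground truth.
    mse = np.mean((np.float64(reference) - np.float64(rendered)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak * peak / mse)

With hypothetical inputs, the disoccluded-pixel fraction reported as a function of the camera angle would simply be holes.mean(), and the warped view could be compared against a directly rendered ground-truth view with psnr(ground_truth, warped). The hole filling and blending of the two surrounding views that the paper applies are omitted from this sketch.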
Original language: English
Pages (from-to): 128-137
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 22
Issue number: 1
DOI: 10.1109/TCSVT.2011.2158362
Publication status: Published - 2012

Fingerprint

Interpolation
Display devices
Textures
Volume rendering
Signal to noise ratio
Pixels
Cameras
Bandwidth

Cite this

@article{bf3cec54b4bb475b9a07073e1045cde1,
title = "View interpolation for medical images on autostereoscopic displays",
abstract = "We present an approach for efficient rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than in a consumer case. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting limited set of views enhanced with a depth signal per view. We propose an efficient view interpolation and rendering algorithm at the receiver side based on texture+depth data representation, which can operate with a limited amount of views. We study the main artifacts that occur during rendering-occlusions, and we quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio for rendered texture and depth as well as the amount of disoccluded pixels as a function of the angle between surrounding cameras.",
author = "S. Zinger and D. Ruijters and Q.L. Do and {With, de}, P.H.N.",
year = "2012",
doi = "10.1109/TCSVT.2011.2158362",
language = "English",
volume = "22",
pages = "128--137",
journal = "IEEE Transactions on Circuits and Systems for Video Technology",
issn = "1051-8215",
publisher = "Institute of Electrical and Electronics Engineers",
number = "1",

}

View interpolation for medical images on autostereoscopic displays. / Zinger, S.; Ruijters, D.; Do, Q.L.; de With, P.H.N.

In: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 22, No. 1, 2012, p. 128-137.

Research output: Contribution to journal › Article › Academic › peer-review

TY - JOUR

T1 - View interpolation for medical images on autostereoscopic displays

AU - Zinger, S.

AU - Ruijters, D.

AU - Do, Q.L.

AU - de With, P.H.N.

PY - 2012

Y1 - 2012

N2 - We present an approach for efficiently rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than consumer applications. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting a limited set of views, each enhanced with a depth signal. We propose an efficient view interpolation and rendering algorithm at the receiver side, based on a texture+depth data representation, that can operate with a limited number of views. We study the main artifacts that occur during rendering, namely occlusions, and quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio for rendered texture and depth, as well as the number of disoccluded pixels, as a function of the angle between the surrounding cameras.

AB - We present an approach for efficiently rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than consumer applications. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting a limited set of views, each enhanced with a depth signal. We propose an efficient view interpolation and rendering algorithm at the receiver side, based on a texture+depth data representation, that can operate with a limited number of views. We study the main artifacts that occur during rendering, namely occlusions, and quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio for rendered texture and depth, as well as the number of disoccluded pixels, as a function of the angle between the surrounding cameras.

U2 - 10.1109/TCSVT.2011.2158362

DO - 10.1109/TCSVT.2011.2158362

M3 - Article

VL - 22

SP - 128

EP - 137

JO - IEEE Transactions on Circuits and Systems for Video Technology

JF - IEEE Transactions on Circuits and Systems for Video Technology

SN - 1051-8215

IS - 1

ER -