TY - JOUR
T1 - Deep Unfolding with Normalizing Flow Priors for Inverse Problems
AU - Wei, Xinyi
AU - van Gorp, Hans
AU - Gonzalez Carabarin, Lizeth
AU - Freedman, Daniel
AU - Eldar, Yonina C.
AU - van Sloun, Ruud J.G.
PY - 2022/6/3
Y1 - 2022/6/3
N2 - Many application domains, spanning from computational photography to medical imaging, require recovery of high-fidelity images from noisy, incomplete, or compressed measurements. State-of-the-art methods for solving these inverse problems combine deep learning with iterative model-based solvers, a concept known as deep algorithm unfolding or unrolling. By combining a priori knowledge of the forward measurement model with learned proximal image-to-image mappings based on deep networks, these methods yield solutions that are both physically feasible (data-consistent) and perceptually plausible (consistent with prior belief). However, current proximal mappings based on (predominantly convolutional) neural networks only implicitly learn such image priors. In this paper, we propose to make these image priors fully explicit by embedding deep generative models in the form of normalizing flows within the unfolded proximal gradient algorithm, and training the entire algorithm end-to-end for a given task. We demonstrate that the proposed method outperforms competitive baselines on various image recovery tasks, spanning from image denoising to inpainting and deblurring, effectively adapting the prior to the restoration task at hand.
KW - Deep Unfolding
KW - Normalizing Flows
KW - Inverse Problems
KW - Image Reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85131741093&partnerID=8YFLogxK
U2 - 10.1109/TSP.2022.3179807
DO - 10.1109/TSP.2022.3179807
M3 - Article
SN - 1053-587X
VL - 70
SP - 2962
EP - 2971
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -