Conditional Injective Flows for Bayesian Imaging
- URL: http://arxiv.org/abs/2204.07664v3
- Date: Mon, 3 Apr 2023 12:13:43 GMT
- Title: Conditional Injective Flows for Bayesian Imaging
- Authors: AmirEhsan Khorashadizadeh, Konik Kothari, Leonardo Salsi, Ali
Aghababaei Harandi, Maarten de Hoop, Ivan Dokmanić
- Abstract summary: Injectivity reduces memory footprint and training time, while the low-dimensional latent space and architectural innovations let C-Trumpets outperform regular conditional flows at a lower compute and memory budget.
C-Trumpets enable fast approximation of point estimates like MMSE or MAP as well as physically-meaningful uncertainty quantification.
- Score: 18.561430512510956
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most deep learning models for computational imaging regress a single
reconstructed image. In practice, however, ill-posedness, nonlinearity, model
mismatch, and noise often conspire to make such point estimates misleading or
insufficient. The Bayesian approach models images and (noisy) measurements as
jointly distributed random vectors and aims to approximate the posterior
distribution of unknowns. Recent variational inference methods based on
conditional normalizing flows are a promising alternative to traditional MCMC
methods, but they come with drawbacks: excessive memory and compute demands for
moderate to high resolution images and underwhelming performance on hard
nonlinear problems. In this work, we propose C-Trumpets -- conditional
injective flows specifically designed for imaging problems, which greatly
diminish these challenges. Injectivity reduces memory footprint and training
time, while the low-dimensional latent space, together with architectural
innovations like fixed-volume-change layers and skip-connection revnet layers,
lets C-Trumpets outperform regular conditional flow models on a variety of
imaging and image restoration tasks, including limited-view CT and nonlinear
inverse scattering, with a lower compute and memory budget. C-Trumpets enable
fast approximation of
point estimates like MMSE or MAP as well as physically-meaningful uncertainty
quantification.
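The abstract outlines the structure clearly enough to sketch: a conditional flow over a low-dimensional latent space, conditioned on the measurements, followed by an injective expansion into image space; posterior samples then yield MMSE estimates and pixelwise uncertainty maps. Below is a minimal sketch of that pipeline, assuming PyTorch; the class names, plain affine couplings, fixed linear expansion, and toy dimensions are illustrative stand-ins, not the authors' fixed-volume-change or skip-connection revnet layers.

```python
# Minimal sketch of conditional injective-flow posterior sampling (assumed
# PyTorch). CondCoupling / CTrumpetSketch and the toy dimensions are
# hypothetical stand-ins for the paper's architecture.
import torch
import torch.nn as nn

class CondCoupling(nn.Module):
    """Affine coupling layer conditioned on the measurement vector y."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z, y):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, y], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep scales well behaved
        z2 = z2 * torch.exp(s) + t
        return torch.cat([z1, z2], dim=1), s.sum(dim=1)  # output, log|det J|

class CTrumpetSketch(nn.Module):
    """Conditional flow in a low-dimensional latent space, followed by an
    injective expansion to image space (here just a fixed tall linear map)."""
    def __init__(self, latent_dim, image_dim, cond_dim, n_layers=4):
        super().__init__()
        self.latent_dim = latent_dim
        self.flows = nn.ModuleList(
            [CondCoupling(latent_dim, cond_dim) for _ in range(n_layers)])
        self.expand = nn.Linear(latent_dim, image_dim, bias=False)

    def sample(self, y, n):
        # Push Gaussian latents through the conditional flow, then expand.
        z = torch.randn(n, self.latent_dim)
        y_rep = y.expand(n, -1)
        for flow in self.flows:
            z, _ = flow(z, y_rep)               # log-dets only needed for training
        return self.expand(z)                   # posterior samples in image space

# Usage: given one measurement vector y, draw posterior samples and
# approximate the MMSE estimate plus a pixelwise uncertainty map.
model = CTrumpetSketch(latent_dim=16, image_dim=64, cond_dim=8)
y = torch.randn(1, 8)
with torch.no_grad():
    samples = model.sample(y, n=256)            # (256, 64)
mmse = samples.mean(dim=0)                      # posterior mean ≈ MMSE estimate
uncertainty = samples.std(dim=0)                # spread of the posterior samples
```

A full model would also need the inverse (projection) direction and a likelihood-based training loop, which this sketch omits.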
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Learning Diffusion Model from Noisy Measurement using Principled Expectation-Maximization Method [9.173055778539641]
We propose a principled expectation-maximization (EM) framework that iteratively learns diffusion models from noisy data with arbitrary corruption types.
Our framework employs a plug-and-play Monte Carlo method to accurately estimate clean images from noisy measurements, followed by training the diffusion model using the reconstructed images.
arXiv Detail & Related papers (2024-10-15T03:54:59Z)
- FlowDepth: Decoupling Optical Flow for Self-Supervised Monocular Depth Estimation [8.78717459496649]
We propose FlowDepth, where a Dynamic Motion Flow Module (DMFM) decouples the optical flow with a mechanism-based approach and warps the dynamic regions, solving the mismatch problem.
To address the unfairness of photometric errors caused by high-frequency and low-texture regions, we use Depth-Cue-Aware Blur (DCABlur) at the input level and a cost-volume sparsity loss at the loss level.
arXiv Detail & Related papers (2024-03-28T10:31:23Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, our work is a more generic design that includes such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z)
- Deep Learning-Based Defect Classification and Detection in SEM Images [1.9206693386750882]
In particular, we train RetinaNet models using different ResNet and VGGNet architectures as backbones.
We propose a preference-based ensemble strategy to combine the output predictions from different models in order to achieve better performance on classification and detection of defects.
arXiv Detail & Related papers (2022-06-20T16:34:11Z)
- Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method outperforms the current state of the art.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)