Deep Injective Prior for Inverse Scattering
- URL: http://arxiv.org/abs/2301.03092v2
- Date: Fri, 21 Jul 2023 22:34:18 GMT
- Title: Deep Injective Prior for Inverse Scattering
- Authors: AmirEhsan Khorashadizadeh, Vahid Khorashadizadeh, Sepehr Eskandari,
Guy A.E. Vandenbosch, Ivan Dokmanić
- Abstract summary: In electromagnetic inverse scattering, the goal is to reconstruct object permittivity using scattered waves.
Deep learning has shown promise as an alternative to iterative solvers.
We propose a data-driven framework for inverse scattering based on deep generative models.
- Score: 16.36016615416872
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In electromagnetic inverse scattering, the goal is to reconstruct object
permittivity using scattered waves. While deep learning has shown promise as an
alternative to iterative solvers, it is primarily used in supervised frameworks
which are sensitive to distribution drift of the scattered fields, common in
practice. Moreover, these methods typically provide a single estimate of the
permittivity pattern, which may be inadequate or misleading due to noise and
the ill-posedness of the problem. In this paper, we propose a data-driven
framework for inverse scattering based on deep generative models. Our approach
learns a low-dimensional manifold as a regularizer for recovering target
permittivities. Unlike supervised methods that necessitate both scattered
fields and target permittivities, our method only requires the target
permittivities for training; it can then be used with any experimental setup.
We also introduce a Bayesian framework for approximating the posterior
distribution of the target permittivity, enabling multiple estimates and
uncertainty quantification. Extensive experiments with synthetic and
experimental data demonstrate that our framework outperforms traditional
iterative solvers, particularly for strong scatterers, while achieving
comparable reconstruction quality to state-of-the-art supervised learning
methods like the U-Net.
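The abstract's central idea — constraining the reconstruction to a learned low-dimensional latent manifold rather than searching over all permittivity patterns — can be illustrated with a toy linear analogue. Everything below is a hypothetical stand-in: `G` plays the role of the trained injective generative model and `A` the role of the scattering forward operator, whereas the actual method uses a trained neural network and the nonlinear wave physics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 64 permittivity "pixels", an 8-dimensional latent
# space, and only 32 scattered-field measurements. Recovering 64
# unknowns from 32 measurements is underdetermined on its own; the
# low-dimensional latent acts as the regularizer.
n_pixels, n_latent, n_meas = 64, 8, 32
G = rng.standard_normal((n_pixels, n_latent))  # stand-in generator
A = rng.standard_normal((n_meas, n_pixels))    # stand-in forward operator

# Ground-truth target lying on the generator's range, plus noisy data.
x_true = G @ rng.standard_normal(n_latent)
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

# Reconstruction: minimize ||A G z - y||^2 over the latent z only.
# With linear G and A this is a small least-squares problem; the real
# framework solves the analogous nonlinear problem by gradient descent
# through the network.
M = A @ G
z_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
x_hat = G @ z_hat

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Although the pixel-space problem has more unknowns than measurements, restricting the search to the 8-dimensional latent space makes it well-posed, which is the role the learned manifold plays in the paper.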
Related papers
- Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), by fully utilizing the potential of the pre-trained stable diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
arXiv Detail & Related papers (2024-10-04T01:52:23Z)
- A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization [7.378582040635655]
Current deep learning approaches rely on generative models that yield exact sample likelihoods.
This work introduces a method that lifts this restriction and opens the possibility to employ highly expressive latent variable models.
We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
arXiv Detail & Related papers (2024-06-03T17:55:02Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z)
- Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
arXiv Detail & Related papers (2023-10-10T14:06:56Z)
- Observation-Guided Diffusion Probabilistic Models [41.749374023639156]
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM).
Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain.
We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines.
arXiv Detail & Related papers (2023-10-06T06:29:06Z)
- Improved sampling via learned diffusions [8.916420423563478]
Recently, a series of papers proposed deep learning-based approaches to sample from target distributions using controlled diffusion processes.
We identify these approaches as special cases of a generalized Schrödinger bridge problem.
We propose a variational formulation based on divergences between path space measures of time-reversed diffusion processes.
arXiv Detail & Related papers (2023-07-03T17:58:26Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.