Imaging Interiors: An Implicit Solution to Electromagnetic Inverse Scattering Problems
- URL: http://arxiv.org/abs/2407.09352v1
- Date: Fri, 12 Jul 2024 15:25:54 GMT
- Title: Imaging Interiors: An Implicit Solution to Electromagnetic Inverse Scattering Problems
- Authors: Ziyuan Luo, Boxin Shi, Haoliang Li, Renjie Wan
- Abstract summary: Electromagnetic Inverse Scattering Problems (EISP) have gained wide applications in computational imaging.
This paper tackles those challenges in EISP via an implicit approach.
Our approach outperforms existing methods on standard benchmark datasets.
- Score: 74.28677741399966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electromagnetic Inverse Scattering Problems (EISP) have gained wide applications in computational imaging. By solving EISP, the internal relative permittivity of the scatterer can be non-invasively determined based on the scattered electromagnetic fields. Despite previous efforts to address EISP, achieving better solutions to this problem has remained elusive, due to the challenges posed by inversion and discretization. This paper tackles those challenges in EISP via an implicit approach. By representing the scatterer's relative permittivity as a continuous implicit representation, our method is able to address the low-resolution problems arising from discretization. Further, optimizing this implicit representation within a forward framework allows us to conveniently circumvent the challenges posed by inverse estimation. Our approach outperforms existing methods on standard benchmark datasets. Project page: https://luo-ziyuan.github.io/Imaging-Interiors
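To make the idea above concrete, here is a minimal, hypothetical PyTorch sketch of an implicit-representation pipeline: a small coordinate MLP stands in for the continuous relative-permittivity field and is optimized in a forward fashion so that simulated scattered fields match measurements. This is not the authors' implementation; the grid size, the Born-approximation forward matrix, the incident field, and the "measured" data below are placeholder assumptions made only to keep the sketch runnable.

```python
# Minimal sketch (not the paper's code): coordinate MLP as a continuous
# implicit representation of relative permittivity, fitted through a
# simplified differentiable forward model (Born approximation stand-in).
import torch
import torch.nn as nn

N = 32  # assumed resolution at which the continuous representation is sampled
xs = torch.linspace(-1.0, 1.0, N)
coords = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)

class PermittivityMLP(nn.Module):
    """Maps a 2D location in the imaging domain to a relative permittivity >= 1."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):
        return 1.0 + nn.functional.softplus(self.net(xy))

# Stand-in linear forward operator: under the Born approximation the scattered
# field at the receivers is roughly G @ (chi * E_inc) with contrast chi = eps_r - 1.
# A real EISP pipeline would use a full forward solver; G here is a random
# placeholder so the sketch runs end to end.
torch.manual_seed(0)
num_receivers = 64
G = torch.randn(num_receivers, N * N, dtype=torch.cfloat) / N
E_inc = torch.ones(N * N, dtype=torch.cfloat)
E_meas = torch.randn(num_receivers, dtype=torch.cfloat)  # would be real measured data

model = PermittivityMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    eps_r = model(coords).squeeze(-1)             # permittivity sampled on the grid
    chi = torch.complex(eps_r - 1.0, torch.zeros_like(eps_r))
    E_pred = G @ (chi * E_inc)                    # predicted scattered field
    loss = (E_pred - E_meas).abs().pow(2).mean()  # data-fidelity loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Because the representation is continuous, model(...) can afterwards be queried
# at any resolution, which is how the discretization limitation is sidestepped.
```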
Related papers
- ODE-DPS: ODE-based Diffusion Posterior Sampling for Inverse Problems in Partial Differential Equation [1.8356973269166506]
We introduce a novel unsupervised inversion methodology tailored for solving inverse problems arising from PDEs.
Our approach operates within the Bayesian inversion framework, treating the task of solving the posterior distribution as a conditional generation process.
To enhance the accuracy of inversion results, we propose an ODE-based Diffusion inversion algorithm.
arXiv Detail & Related papers (2024-04-21T00:57:13Z)
- Solving General Noisy Inverse Problem via Posterior Sampling: A Policy Gradient Viewpoint [21.22750301965104]
We leverage a pretrained diffusion generative model to solve a wide range of image inverse tasks without task-specific model fine-tuning.
To precisely estimate the guidance score function of the input image, we propose Diffusion Policy Gradient (DPG).
Experiments show that our method is robust to both Gaussian and Poisson noise degradation on multiple linear and non-linear inverse tasks.
arXiv Detail & Related papers (2024-03-15T16:38:47Z)
- NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating the scene parameters such as shapes, materials, or illuminations from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z)
- Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
arXiv Detail & Related papers (2023-09-17T12:06:04Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Deep unfolding as iterative regularization for imaging inverse problems [6.485466095579992]
Deep unfolding methods guide the design of deep neural networks (DNNs) through iterative algorithms.
We prove that the unfolded DNN converges stably to the solution of the inverse problem.
We demonstrate with an example of MRI reconstruction that the proposed method outperforms conventional unfolding methods.
arXiv Detail & Related papers (2022-11-24T07:38:47Z)
- Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z)
- Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind image restoration tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser [7.7288480250888]
We develop a robust and general methodology for making use of implicit priors in deep neural networks.
A CNN trained to perform blind (i.e., with unknown noise level) least-squares denoising is presented.
A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem.
arXiv Detail & Related papers (2020-07-27T15:40:46Z)