Diff-INR: Generative Regularization for Electrical Impedance Tomography
- URL: http://arxiv.org/abs/2409.04494v2
- Date: Tue, 10 Sep 2024 07:40:06 GMT
- Title: Diff-INR: Generative Regularization for Electrical Impedance Tomography
- Authors: Bowen Tong, Junwu Wang, Dong Liu
- Abstract summary: Electrical Impedance Tomography (EIT) reconstructs conductivity distributions within a body from boundary measurements.
EIT reconstruction is hindered by its ill-posed, nonlinear inverse problem, which makes accurate reconstruction difficult.
We propose Diff-INR, a novel method that combines generative regularization with Implicit Neural Representations (INR) through a diffusion model.
- Score: 6.7667436349597985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electrical Impedance Tomography (EIT) is a non-invasive imaging technique that reconstructs conductivity distributions within a body from boundary measurements. However, EIT reconstruction is hindered by its ill-posed nonlinear inverse problem, which makes obtaining accurate results difficult. To tackle this, we propose Diff-INR, a novel method that combines generative regularization with Implicit Neural Representations (INR) through a diffusion model. Diff-INR introduces geometric priors to guide the reconstruction, effectively addressing the shortcomings of traditional regularization methods. By integrating a pre-trained diffusion regularizer with INR, our approach achieves state-of-the-art reconstruction accuracy on both simulated and experimental data. The method demonstrates robust performance across various mesh densities and hyperparameter settings, highlighting its flexibility and efficiency. This advancement represents a significant improvement in managing the ill-posed nature of EIT. Furthermore, the method's principles are applicable to other imaging modalities facing similar challenges with ill-posed inverse problems.
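The abstract describes optimizing an implicit neural representation (a coordinate network for the conductivity field) against a data-fidelity term plus a regularization term supplied by a pre-trained diffusion model. The paper's code is not reproduced here, so the following is only a minimal sketch of that general structure under stated assumptions: ConductivityINR, eit_forward, and generative_prior_loss are hypothetical placeholder names, and the stand-in operators are toys, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConductivityINR(nn.Module):
    """Coordinate MLP mapping (x, y) -> conductivity value (the INR)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep conductivity positive
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

def eit_forward(sigma: torch.Tensor) -> torch.Tensor:
    """Placeholder for a differentiable EIT forward solver mapping a
    conductivity field to predicted boundary voltages (hypothetical)."""
    return sigma.mean(dim=0, keepdim=True)  # toy stand-in, not a real solver

def generative_prior_loss(sigma_img: torch.Tensor) -> torch.Tensor:
    """Placeholder for a regularization term obtained from a pre-trained
    diffusion model (hypothetical); here just a toy penalty."""
    return sigma_img.abs().mean()

# Sample coordinates on a 32 x 32 grid and set up the optimization.
coords = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, 32), torch.linspace(-1, 1, 32), indexing="ij"
), dim=-1).reshape(-1, 2)
measured_voltages = torch.randn(1)            # stand-in boundary data
inr = ConductivityINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
lam = 0.1                                     # regularization weight

for step in range(200):
    sigma = inr(coords)                       # conductivity at grid points
    data_loss = (eit_forward(sigma) - measured_voltages).pow(2).mean()
    reg_loss = generative_prior_loss(sigma.reshape(32, 32))
    loss = data_loss + lam * reg_loss         # fidelity + generative prior
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In an actual Diff-INR-style pipeline, eit_forward would be a differentiable EIT forward solver on a mesh and generative_prior_loss would query the pre-trained diffusion regularizer; the loop above only illustrates how the two loss terms shape the INR parameters.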
Related papers
- Effective Diffusion Transformer Architecture for Image Super-Resolution [63.254644431016345]
We design an effective diffusion transformer for image super-resolution (DiT-SR)
In practice, DiT-SR leverages an overall U-shaped architecture, and adopts a uniform isotropic design for all the transformer blocks.
We analyze the limitations of the widely used AdaLN and present a frequency-adaptive time-step conditioning module.
arXiv Detail & Related papers (2024-09-29T07:14:16Z) - Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
arXiv Detail & Related papers (2023-09-17T12:06:04Z) - SPIRiT-Diffusion: Self-Consistency Driven Diffusion Model for Accelerated MRI [14.545736786515837]
We introduce SPIRiT-Diffusion, a diffusion model for k-space inspired by the iterative self-consistent SPIRiT method.
We evaluate the proposed SPIRiT-Diffusion method using a 3D joint intracranial and carotid vessel wall imaging dataset.
arXiv Detail & Related papers (2023-04-11T08:43:52Z) - Deep unfolding as iterative regularization for imaging inverse problems [6.485466095579992]
Deep unfolding methods guide the design of deep neural networks (DNNs) through iterative algorithms.
We prove that the unfolded DNN converges stably to a solution of the underlying inverse problem.
We demonstrate with an example of MRI reconstruction that the proposed method outperforms conventional unfolding methods; a generic unrolled-iteration sketch appears after this list.
arXiv Detail & Related papers (2022-11-24T07:38:47Z) - JPEG Artifact Correction using Denoising Diffusion Restoration Models [110.1244240726802]
We build upon Denoising Diffusion Restoration Models (DDRM) and propose a method for solving some non-linear inverse problems.
We leverage the pseudo-inverse operator used in DDRM and generalize this concept for other measurement operators.
arXiv Detail & Related papers (2022-09-23T23:47:00Z) - Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
arXiv Detail & Related papers (2022-03-10T14:22:54Z) - Equivariance Regularization for Image Reconstruction [5.025654873456756]
We propose a structure-adaptive regularization scheme for solving imaging inverse problems under incomplete measurements.
This regularization scheme utilizes the equivariant structure in the physics of the measurements to mitigate the ill-posedness of the inverse problem.
Our proposed scheme can be applied in a plug-and-play manner alongside any classic first-order optimization algorithm.
arXiv Detail & Related papers (2022-02-10T14:38:08Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the conditioning of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
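The "Deep unfolding as iterative regularization" entry above describes designing a DNN by unrolling the steps of an iterative reconstruction algorithm. The sketch below is a purely generic illustration of that idea, not the cited paper's method: it unrolls a fixed number of gradient-descent steps on a toy linear inverse problem and inserts a small learned refinement block after each data-consistency update. The operator, dimensions, and module names are assumptions made for the example.

```python
import torch
import torch.nn as nn

class UnrolledRecon(nn.Module):
    """Generic K-step unrolling of x_{k+1} = refine_k(x_k - a_k * A^T (A x_k - y)),
    where each refine_k is a small learned residual block (toy illustration)."""
    def __init__(self, A: torch.Tensor, num_steps: int = 5):
        super().__init__()
        self.A = A                                         # known linear forward operator
        self.step_sizes = nn.Parameter(torch.full((num_steps,), 0.1))
        self.refiners = nn.ModuleList(
            nn.Sequential(nn.Linear(A.shape[1], A.shape[1]), nn.ReLU(),
                          nn.Linear(A.shape[1], A.shape[1]))
            for _ in range(num_steps)
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(y.shape[0], self.A.shape[1])       # initial estimate
        for alpha, refine in zip(self.step_sizes, self.refiners):
            grad = (x @ self.A.T - y) @ self.A             # A^T (A x - y)
            x = x - alpha * grad                           # data-consistency step
            x = x + refine(x)                              # learned residual refinement
        return x

# Toy usage: recover 16-dim signals from 8 random linear measurements.
A = torch.randn(8, 16)
x_true = torch.randn(4, 16)
y = x_true @ A.T
model = UnrolledRecon(A)
x_hat = model(y)
loss = (x_hat - x_true).pow(2).mean()   # supervised training loss
loss.backward()
```

Training such a network end to end on paired (y, x_true) examples is what lets the learned blocks act as an implicit, data-driven regularizer for the unrolled iteration.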
This list is automatically generated from the titles and abstracts of the papers on this site.