SDEIT: Semantic-Driven Electrical Impedance Tomography
- URL: http://arxiv.org/abs/2504.04185v1
- Date: Sat, 05 Apr 2025 14:08:58 GMT
- Title: SDEIT: Semantic-Driven Electrical Impedance Tomography
- Authors: Dong Liu, Yuanchao Wu, Bowen Tong, Jiansong Deng
- Abstract summary: We introduce SDEIT, a novel semantic-driven framework that integrates Stable Diffusion 3.5 into EIT. By coupling an implicit neural representation (INR) network with a plug-and-play optimization scheme, SDEIT improves structural consistency and recovers fine details. This work opens a new pathway for integrating multimodal priors into ill-posed inverse problems like EIT.
- Score: 7.872153285062159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularization methods using prior knowledge are essential in solving ill-posed inverse problems such as Electrical Impedance Tomography (EIT). However, designing effective regularization and integrating prior information into EIT remains challenging due to the complexity and variability of anatomical structures. In this work, we introduce SDEIT, a novel semantic-driven framework that integrates Stable Diffusion 3.5 into EIT, marking the first use of large-scale text-to-image generation models in EIT. SDEIT employs natural language prompts as semantic priors to guide the reconstruction process. By coupling an implicit neural representation (INR) network with a plug-and-play optimization scheme that leverages SD-generated images as generative priors, SDEIT improves structural consistency and recovers fine details. Importantly, this method does not rely on paired training datasets, increasing its adaptability to varied EIT scenarios. Extensive experiments on both simulated and experimental data demonstrate that SDEIT outperforms state-of-the-art techniques, offering superior accuracy and robustness. This work opens a new pathway for integrating multimodal priors into ill-posed inverse problems like EIT.
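A minimal sketch of the kind of plug-and-play loop described in the abstract, assuming a toy linear forward operator `A`, a small MLP as the INR, and a placeholder tensor `sd_prior` standing in for a Stable Diffusion 3.5 generation; the loss weights, network sizes, and names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an INR + generative-prior reconstruction loop (not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)
N = 32  # reconstruction grid is N x N

class INR(nn.Module):
    """Small MLP mapping (x, y) coordinates to conductivity values."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, coords):
        return self.net(coords)

# Coordinate grid in [-1, 1]^2
ys, xs = torch.meshgrid(torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

# Placeholder linear forward operator and measurements (a real EIT forward solver would go here).
A = torch.randn(128, N * N) * 0.05
sigma_true = torch.rand(N * N)
voltages = A @ sigma_true

# Placeholder "semantic prior": an image that would come from Stable Diffusion given a text prompt.
sd_prior = sigma_true.reshape(N, N) + 0.05 * torch.randn(N, N)

inr = INR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
lam = 0.1  # weight of the generative-prior consistency term (illustrative)

for step in range(200):
    sigma = inr(coords).squeeze(-1)                               # current conductivity estimate
    data_loss = ((A @ sigma - voltages) ** 2).mean()              # measurement fidelity
    prior_loss = ((sigma.reshape(N, N) - sd_prior) ** 2).mean()   # consistency with the SD-generated image
    loss = data_loss + lam * prior_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```

In practice the data term would use a nonlinear EIT forward solver and the prior image would be refreshed from the text-to-image model conditioned on a prompt; this toy version keeps both fixed so the loop runs end to end.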
Related papers
- Physics-Driven Neural Compensation For Electrical Impedance Tomography [7.256725037878305]
Electrical Impedance Tomography (EIT) provides a non-invasive, portable imaging modality with significant potential in medical and industrial applications.
EIT faces two primary challenges: the ill-posed nature of its inverse problem and the spatially variable, location-dependent sensitivity distribution.
We propose PhyNC (Physics-driven Neural Compensation), an unsupervised deep learning framework that incorporates the physical principles of EIT.
arXiv Detail & Related papers (2025-04-25T04:44:00Z)
- Paving the way for scientific foundation models: enhancing generalization and robustness in PDEs with constraint-aware pre-training [49.8035317670223]
A scientific foundation model (SciFM) is emerging as a promising tool for learning transferable representations across diverse domains.
We propose incorporating PDE residuals into pre-training either as the sole learning signal or in combination with data loss to compensate for limited or infeasible training data.
Our results show that pre-training with PDE constraints significantly enhances generalization, outperforming models trained solely on solution data.
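One way to picture the constraint-aware pre-training signal is a loss that mixes supervised data fidelity with a PDE residual evaluated at collocation points; the 1-D heat-equation residual, network size, and weighting below are illustrative assumptions, not the paper's setup.

```python
# Toy illustration of combining a data loss with a PDE-residual loss (heat equation u_t = alpha * u_xx),
# in the spirit of constraint-aware pre-training; all choices here are illustrative.
import math
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
alpha = 0.1  # assumed diffusivity

def pde_residual(xt):
    """Residual of u_t - alpha * u_xx at points xt = (x, t), via autograd."""
    xt = xt.requires_grad_(True)
    u = model(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - alpha * u_xx

# Placeholder supervised data (would come from simulated solutions in practice).
xt_data = torch.rand(256, 2)
u_data = torch.sin(math.pi * xt_data[:, 0:1]) * torch.exp(-xt_data[:, 1:2])

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
beta = 1.0  # relative weight of the PDE term (illustrative)

for step in range(100):
    xt_colloc = torch.rand(512, 2)                         # collocation points for the residual
    loss_data = ((model(xt_data) - u_data) ** 2).mean()    # data fidelity
    loss_pde = (pde_residual(xt_colloc) ** 2).mean()       # physics constraint
    loss = loss_data + beta * loss_pde                     # or loss_pde alone as the sole signal
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Dropping the data term recovers the "sole learning signal" variant mentioned in the summary; the mixed form compensates when labeled solution data are scarce.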
arXiv Detail & Related papers (2025-03-24T19:12:39Z)
- Synthetic Data is an Elegant GIFT for Continual Vision-Language Models [52.343627275005026]
GIFT is a novel continual fine-tuning approach to overcome catastrophic forgetting in Vision-Language Models.
We employ a pre-trained diffusion model to recreate both pre-training and learned downstream task data.
Our method consistently outperforms previous state-of-the-art approaches across various settings.
arXiv Detail & Related papers (2025-03-06T09:09:18Z)
- BHViT: Binarized Hybrid Vision Transformer [53.38894971164072]
Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNNs). We propose BHViT, a binarization-friendly hybrid ViT architecture, together with its fully binarized model, guided by three important observations. Our proposed algorithm achieves SOTA performance among binary ViT methods.
arXiv Detail & Related papers (2025-03-04T08:35:01Z)
- MR-EIT: Multi-Resolution Reconstruction for Electrical Impedance Tomography via Data-Driven and Unsupervised Dual-Mode Neural Networks [14.303339179604537]
This paper presents a multi-resolution reconstruction method for Electrical Impedance Tomography (EIT). It is capable of operating in both supervised and unsupervised learning modes. Experimental results indicate that MR-EIT outperforms the compared methods in terms of Structural Similarity (SSIM) and Relative Image Error (RIE).
arXiv Detail & Related papers (2025-03-02T07:06:42Z)
- Conditional Diffusion Model for Electrical Impedance Tomography [17.831065873724153]
Electrical impedance tomography (EIT) is a non-invasive imaging technique widely used in industrial inspection, medical monitoring, and tactile sensing. Due to the inherent non-linearity and ill-conditioned nature of the EIT inverse problem, the reconstructed image is highly sensitive to the measured data, and random noise artifacts often appear in it. A conditional diffusion model with voltage consistency (CDMVC) is proposed in this study to address this issue.
arXiv Detail & Related papers (2025-01-10T07:58:38Z)
- Diff-INR: Generative Regularization for Electrical Impedance Tomography [6.7667436349597985]
Electrical Impedance Tomography (EIT) reconstructs conductivity distributions within a body from boundary measurements.
EIT reconstruction is hindered by its ill-posed nonlinear inverse problem, which complicates accurate results.
We propose Diff-INR, a novel method that combines generative regularization with Implicit Neural Representations (INR) through a diffusion model.
arXiv Detail & Related papers (2024-09-06T14:21:23Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on a Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
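One possible reading of the distance-based weighting, sketched below, is an attention score biased by the spatial distance between patch positions; the single-head form, shared q/k/v, and linear distance penalty are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative distance-weighted self-attention over image patches (an assumed reading
# of "Distance-based Weighted Transformer", not the paper's exact design).
import torch
import torch.nn.functional as F

B, H, W, D = 1, 8, 8, 32                 # batch, patch-grid height/width, embedding dim
tokens = torch.randn(B, H * W, D)

# Pairwise Euclidean distances between patch centers on the H x W grid.
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
pos = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float()   # (H*W, 2)
dist = torch.cdist(pos, pos)                                  # (H*W, H*W)

q = k = v = tokens                                            # single head, no projections, for brevity
scores = q @ k.transpose(-2, -1) / D ** 0.5                   # standard dot-product scores
scores = scores - 0.1 * dist                                  # penalize far-apart patches (assumed bias form)
out = F.softmax(scores, dim=-1) @ v                           # (B, H*W, D)
print(out.shape)
```

In a full block, learned query/key/value projections and multiple heads would replace the shared tensors, and the distance bias would be tuned or learned rather than fixed.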
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- A Decomposition-Based Hybrid Ensemble CNN Framework for Improving Cross-Subject EEG Decoding Performance [6.762514044136396]
We propose a decomposition-based hybrid ensemble convolutional neural network (CNN) framework to enhance the capability of decoding EEG signals.
Our framework can be simply extended to any CNN architecture and applied in any EEG-related sectors.
arXiv Detail & Related papers (2022-03-14T13:12:31Z)
- Learning A 3D-CNN and Transformer Prior for Hyperspectral Image Super-Resolution [80.93870349019332]
We propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs.
Specifically, we first use the gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution processes.
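The gradient-then-unfolding recipe can be pictured as a few unrolled stages that alternate a data-fidelity gradient step with a small learned prior module; the degradation operator, the convolutional prior block, and the step size below are placeholders for illustration rather than the paper's Transformer prior.

```python
# Hypothetical sketch of an unfolded gradient scheme for super-resolution:
# each stage takes a gradient step on the data term, then applies a learned prior module.
import torch
import torch.nn as nn
import torch.nn.functional as F

C, scale = 8, 2                        # number of spectral bands, upscaling factor (toy values)

def degrade(x):
    """Placeholder degradation: downsampling by average pooling."""
    return F.avg_pool2d(x, scale)

def upsample(x):
    return F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)

class PriorBlock(nn.Module):
    """Tiny convolutional prior standing in for the paper's Transformer prior."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(C, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, C, 3, padding=1))
    def forward(self, x):
        return x + self.conv(x)        # residual refinement

stages = nn.ModuleList([PriorBlock() for _ in range(4)])   # 4 unrolled iterations
step = 0.5                                                  # gradient step size (illustrative)

lr_img = torch.randn(1, C, 16, 16)                          # low-resolution hyperspectral input
x = upsample(lr_img)                                        # initial estimate
for prior in stages:
    grad = upsample(degrade(x) - lr_img)                    # gradient of 0.5*||degrade(x) - y||^2 (adjoint approximated by upsampling)
    x = prior(x - step * grad)                              # gradient step, then learned prior
print(x.shape)                                              # (1, C, 32, 32)
```

Training would fit the stage parameters end to end against high-resolution targets; the loop above only shows the unrolled forward structure.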
arXiv Detail & Related papers (2021-11-27T15:38:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.