A Learning-Based 3D EIT Image Reconstruction Method
- URL: http://arxiv.org/abs/2208.14449v1
- Date: Tue, 30 Aug 2022 12:00:43 GMT
- Title: A Learning-Based 3D EIT Image Reconstruction Method
- Authors: Zhaoguang Yi, Zhou Chen, and Yunjie Yang
- Abstract summary: This paper presents a learning-based approach for 3D EIT image reconstruction, named the Transposed convolution with Neurons Network (TN-Net).
Simulation and experimental results show the superior performance and generalization ability of TN-Net compared with prevailing 3D EIT image reconstruction algorithms.
- Score: 3.2116198597240846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has been widely employed to solve the Electrical Impedance
Tomography (EIT) image reconstruction problem. Most existing physical
model-based and learning-based approaches focus on 2D EIT image reconstruction.
However, when they are directly extended to the 3D domain, the reconstruction
performance in terms of image quality and noise robustness is hardly guaranteed
mainly due to the significant increase in dimensionality. This paper presents a
learning-based approach for 3D EIT image reconstruction, which is named
Transposed convolution with Neurons Network (TN-Net). Simulation and
experimental results show the superior performance and generalization ability
of TN-Net compared with prevailing 3D EIT image reconstruction algorithms.
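The architecture itself is not detailed in this abstract, but the name suggests the pattern common to learning-based EIT reconstructors: a fully connected ("neurons") layer lifts the boundary-voltage measurement vector to a coarse feature volume, which 3D transposed convolutions then upsample to the reconstruction grid. A minimal NumPy sketch of that pattern (the measurement count, grid sizes, and random weights are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_transpose3d(x, kernel, stride=2):
    """Naive single-channel 3D transposed convolution (no padding):
    each input voxel scatters a scaled copy of the kernel into the output."""
    d, h, w = x.shape
    kd, kh, kw = kernel.shape
    out = np.zeros(((d - 1) * stride + kd,
                    (h - 1) * stride + kh,
                    (w - 1) * stride + kw))
    for i in range(d):
        for j in range(h):
            for k in range(w):
                out[i * stride:i * stride + kd,
                    j * stride:j * stride + kh,
                    k * stride:k * stride + kw] += x[i, j, k] * kernel
    return out

# "Neurons": a fully connected layer lifts the 1D measurement vector
# (208 boundary voltages for a hypothetical multi-plane EIT setup)
# to a coarse 4x4x4 feature volume.
n_meas = 208
W = rng.standard_normal((4 * 4 * 4, n_meas)) * 0.01
v = rng.standard_normal(n_meas)            # one frame of boundary voltages
coarse = (W @ v).reshape(4, 4, 4)

# "Transposed convolution": two upsampling stages, 4^3 -> 8^3 -> 16^3.
kernel = rng.standard_normal((2, 2, 2))
volume = conv_transpose3d(conv_transpose3d(coarse, kernel), kernel)
print(volume.shape)  # (16, 16, 16)
```

In a trained network, `W` and the kernels would be learned from simulated conductivity distributions rather than drawn at random; the sketch only shows how transposed convolutions grow a coarse volume to the reconstruction grid.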
Related papers
- Tada-DIP: Input-adaptive Deep Image Prior for One-shot 3D Image Reconstruction [14.275526906868622]
We introduce Tada-DIP, a highly effective and fully 3D DIP method for solving 3D inverse problems.
By combining input-adaptation and denoising regularization, Tada-DIP produces high-quality 3D reconstructions.
Experiments on sparse-view X-ray computed tomography reconstruction validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2025-12-03T16:56:38Z)
- HORT: Monocular Hand-held Objects Reconstruction with Transformers [61.36376511119355]
Reconstructing hand-held objects in 3D from monocular images is a significant challenge in computer vision.
We propose a transformer-based model to efficiently reconstruct dense 3D point clouds of hand-held objects.
Our method achieves state-of-the-art accuracy with much faster inference speed, while generalizing well to in-the-wild images.
arXiv Detail & Related papers (2025-03-27T09:45:09Z)
- A Generative Approach to High Fidelity 3D Reconstruction from Text Data [0.0]
This research proposes a fully automated pipeline that seamlessly integrates text-to-image generation, various image processing techniques, and deep learning methods for reflection removal and 3D reconstruction.
By leveraging state-of-the-art generative models like Stable Diffusion, the methodology translates natural language inputs into detailed 3D models through a multi-stage workflow.
This approach addresses key challenges in generative reconstruction, such as maintaining semantic coherence, managing geometric complexity, and preserving detailed visual information.
arXiv Detail & Related papers (2025-03-05T16:54:15Z)
- Frequency-based View Selection in Gaussian Splatting Reconstruction [9.603843571051744]
We investigate the problem of active view selection to perform 3D Gaussian Splatting reconstructions with as few input images as possible.
By ranking the potential views in the frequency domain, we are able to effectively estimate the potential information gain of new viewpoints.
Our method achieves state-of-the-art results in view selection, demonstrating its potential for efficient image-based 3D reconstruction.
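The ranking idea in this summary, scoring candidate views by their spatial-frequency content as a proxy for information gain, can be illustrated with a toy criterion (the cutoff, the score, and the test images are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

def high_freq_fraction(view, cutoff=0.25):
    """Fraction of spectral magnitude beyond a normalized radial cutoff.
    A view rich in high spatial frequencies is presumed to add more
    information than one the current model already renders smoothly."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(view)))
    h, w = view.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return float(F[radius > cutoff].sum() / F.sum())

rng = np.random.default_rng(1)
smooth = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # low-frequency view
textured = rng.standard_normal((64, 64))                    # high-frequency view

# The textured view carries more high-frequency content, so a
# frequency-based selector would pick it as the next input view.
print(high_freq_fraction(textured) > high_freq_fraction(smooth))  # True
```

The actual paper ranks real candidate viewpoints during Gaussian Splatting reconstruction; this snippet only demonstrates why a frequency-domain score separates information-rich views from smooth ones.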
arXiv Detail & Related papers (2024-09-24T21:44:26Z)
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction [2.2954246824369218]
3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis.
We propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues.
arXiv Detail & Related papers (2023-09-06T19:30:22Z)
- End-to-End Multi-View Structure-from-Motion with Hypercorrelation Volumes [7.99536002595393]
Deep learning techniques have been proposed to tackle the multi-view structure-from-motion problem.
We improve on the state-of-the-art two-view structure-from-motion (SfM) approach.
We extend it to the general multi-view case and evaluate it on the complex benchmark dataset DTU.
arXiv Detail & Related papers (2022-09-14T20:58:44Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction, to study the model generalization on unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Translational Symmetry-Aware Facade Parsing for 3D Building Reconstruction [11.263458202880038]
In this paper, we present a novel translational symmetry-based approach to improving deep neural networks for facade parsing.
We propose a novel scheme to fuse anchor-free detection into a single-stage network, which enables efficient training and better convergence.
We employ an off-the-shelf rendering engine like Blender to reconstruct the realistic high-quality 3D models using procedural modeling.
arXiv Detail & Related papers (2021-06-02T03:10:51Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.