Simulator-Based Self-Supervision for Learned 3D Tomography
Reconstruction
- URL: http://arxiv.org/abs/2212.07431v2
- Date: Fri, 26 May 2023 10:27:25 GMT
- Title: Simulator-Based Self-Supervision for Learned 3D Tomography
Reconstruction
- Authors: Onni Kosomaa, Samuli Laine, Tero Karras, Miika Aittala, Jaakko
Lehtinen
- Abstract summary: Prior machine learning approaches require reference reconstructions computed by another algorithm for training.
We train our model in a fully self-supervised manner using only noisy 2D X-ray data.
Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions.
- Score: 34.93595625809309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a deep learning method for 3D volumetric reconstruction in
low-dose helical cone-beam computed tomography. Prior machine learning
approaches require reference reconstructions computed by another algorithm for
training. In contrast, we train our model in a fully self-supervised manner
using only noisy 2D X-ray data. This is enabled by incorporating a fast
differentiable CT simulator in the training loop. As we do not rely on
reference reconstructions, the fidelity of our results is not limited by their
potential shortcomings. We evaluate our method on real helical cone-beam
projections and simulated phantoms. Our results show significantly higher
visual fidelity and better PSNR over techniques that rely on existing
reconstructions. When applied to full-dose data, our method produces
high-quality results orders of magnitude faster than iterative techniques.
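The training idea described in the abstract (reconstruct a volume with a network, re-simulate its projections with a differentiable CT simulator, and penalize the mismatch against the measured noisy X-ray data) can be illustrated with a small toy example. The sketch below uses a 2D parallel-beam geometry instead of the paper's fast helical cone-beam simulator, and the tiny network, the unfiltered back-projection input, and the held-out-view noise handling are illustrative assumptions rather than details taken from the paper.

```python
# Minimal toy sketch of simulator-in-the-loop self-supervision for CT.
# Assumptions (not from the paper): 2D parallel-beam geometry, a tiny CNN,
# unfiltered back-projection as network input, and a held-out-view loss.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def forward_project(image, angles):
    """Differentiable toy 'CT simulator': rotate the image and integrate
    along one axis to obtain parallel-beam line integrals (a sinogram)."""
    B = image.shape[0]
    views = []
    for theta in angles:
        c, s = math.cos(theta), math.sin(theta)
        rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0]],
                           dtype=image.dtype, device=image.device)
        grid = F.affine_grid(rot.repeat(B, 1, 1), list(image.shape),
                             align_corners=False)
        rotated = F.grid_sample(image, grid, align_corners=False)
        views.append(rotated.sum(dim=2))           # (B, 1, W) line integrals
    return torch.stack(views, dim=2)               # (B, 1, n_views, W)

def back_project(sinogram, angles, image_shape):
    """Unfiltered back-projection via the autograd adjoint of the (linear)
    projector; used only to give the network a crude input image."""
    x = torch.zeros(image_shape, requires_grad=True)
    forward_project(x, angles).backward(sinogram)
    return x.grad.detach()

class TinyReconNet(nn.Module):
    """Tiny stand-in for the paper's reconstruction network."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        return self.body(x)

def self_supervised_step(net, opt, noisy_sino, angles, image_shape):
    """One training step: reconstruct from half of the noisy views, re-simulate
    the held-out views with the differentiable projector, and compare against
    the held-out noisy measurements. No reference reconstruction is used."""
    in_idx = list(range(0, len(angles), 2))
    out_idx = list(range(1, len(angles), 2))
    crude = back_project(noisy_sino[:, :, in_idx],
                         [angles[i] for i in in_idx], image_shape)
    crude = crude / len(in_idx)                    # keep the input scale tame
    recon = net(crude)                             # predicted clean image
    resim = forward_project(recon, [angles[i] for i in out_idx])
    loss = F.mse_loss(resim, noisy_sino[:, :, out_idx])  # projection-domain loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    H = W = 64
    n_views = 32
    angles = [k * math.pi / n_views for k in range(n_views)]
    # Stand-in for measured data: project a box phantom and add noise.
    phantom = torch.zeros(1, 1, H, W)
    phantom[:, :, 20:44, 24:40] = 1.0
    noisy_sino = forward_project(phantom, angles)
    noisy_sino = noisy_sino + 0.5 * torch.randn_like(noisy_sino)
    net = TinyReconNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(200):
        loss = self_supervised_step(net, opt, noisy_sino, angles, (1, 1, H, W))
    print(f"final held-out projection loss: {loss:.4f}")
```

Because the loss is computed in the projection domain against held-out noisy views, no reference reconstruction enters training at any point, which mirrors the self-supervision claim in the abstract.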
Related papers
- R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z)
- Splatter Image: Ultra-Fast Single-View 3D Reconstruction [67.96212093828179]
Splatter Image is based on Gaussian Splatting, which allows fast and high-quality reconstruction of 3D scenes from multiple images.
We learn a neural network that, at test time, performs reconstruction in a feed-forward manner, at 38 FPS.
On several synthetic, real, multi-category and large-scale benchmark datasets, we achieve better results in terms of PSNR, LPIPS, and other metrics while training and evaluating much faster than prior works.
arXiv Detail & Related papers (2023-12-20T16:14:58Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- 3D helical CT Reconstruction with a Memory Efficient Learned Primal-Dual Architecture [1.3518297878940662]
This paper modifies a domain-adapted neural network architecture, the Learned Primal-Dual (LPD), so that it can be trained and applied to reconstruction in the 3D helical CT setting.
It is the first to apply an unrolled deep learning architecture for reconstruction on full-sized clinical data.
arXiv Detail & Related papers (2022-05-24T10:32:32Z)
- Sparse-view Cone Beam CT Reconstruction using Data-consistent Supervised and Adversarial Learning from Scarce Training Data [27.325532306485755]
As the number of available projections decreases, traditional reconstruction techniques perform poorly.
Deep learning-based reconstruction methods have garnered a lot of attention because they yield better performance when enough training data is available.
This work focuses on image reconstruction in settings where both the number of available CT projections and the amount of training data are extremely limited.
arXiv Detail & Related papers (2022-01-23T17:08:52Z)
- Self Context and Shape Prior for Sensorless Freehand 3D Ultrasound Reconstruction [61.62191904755521]
3D freehand US reconstruction is promising in addressing this problem by providing a broad scan range and freeform scanning.
Existing deep learning-based methods only focus on the basic cases of skill sequences.
We propose a novel approach to sensorless freehand 3D US reconstruction considering the complex skill sequences.
arXiv Detail & Related papers (2021-07-31T16:06:50Z)
- 3D Scattering Tomography by Deep Learning with Architecture Tailored to Cloud Fields [12.139158398361866]
We present 3DeepCT, a deep neural network for computed tomography, which performs 3D reconstruction of scattering volumes from multi-view images.
We show that 3DeepCT outperforms physics-based inverse scattering methods in terms of accuracy while offering an orders-of-magnitude improvement in computational time.
arXiv Detail & Related papers (2020-12-10T20:31:44Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
- Self-Supervised Training For Low Dose CT Reconstruction [0.0]
This study defines a training scheme to use low-dose sinograms as their own training targets.
We apply the self-supervision principle in the projection domain, where the noise is element-wise independent; a toy sketch of this idea appears after this list.
We demonstrate that our method outperforms both conventional and compressed sensing based iterative reconstruction methods.
arXiv Detail & Related papers (2020-10-25T22:02:14Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn and optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Noise2Filter: fast, self-supervised learning and real-time reconstruction for 3D Computed Tomography [0.0]
At X-ray beamlines, the achievable time-resolution for 3D tomographic imaging of the interior of an object has been reduced to a fraction of a second.
We propose Noise2Filter, a learned filter method that can be trained using only the measured data.
We show limited loss of accuracy compared to training with additional training data, and improved accuracy compared to standard filter-based methods.
arXiv Detail & Related papers (2020-07-03T12:12:10Z)
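The projection-domain self-supervision principle mentioned in the Self-Supervised Training For Low Dose CT Reconstruction entry above can also be illustrated with a small sketch. The Noise2Self-style masking, the neighbour-average infill, and the tiny denoiser below are assumptions chosen for illustration; the cited paper's exact training scheme may differ.

```python
# Toy sketch of projection-domain self-supervision via Noise2Self-style
# masking. Assumptions (not necessarily the cited paper's scheme): random
# pixel masking, a neighbour-average infill, and a tiny CNN denoiser.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySinogramDenoiser(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        return self.body(x)

def masked_step(net, opt, noisy_sino, mask_frac=0.05):
    """Hide a random subset of sinogram pixels, replace them with the average
    of their neighbours, and train the network to predict the hidden noisy
    values. This is only sensible because the projection-domain noise is
    (assumed to be) element-wise independent."""
    mask = (torch.rand_like(noisy_sino) < mask_frac).float()
    # Average of the 8 neighbours, excluding the centre pixel (approximate at
    # image borders, which is acceptable for a toy example).
    neigh_sum = F.avg_pool2d(noisy_sino, 3, stride=1, padding=1) * 9.0
    infill = (neigh_sum - noisy_sino) / 8.0
    net_in = noisy_sino * (1 - mask) + infill * mask   # masked input
    pred = net(net_in)
    loss = ((pred - noisy_sino) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage on a random stand-in sinogram of shape (batch, 1, n_views, n_detectors).
noisy_sino = torch.randn(1, 1, 64, 96)
net = TinySinogramDenoiser()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    loss = masked_step(net, opt, noisy_sino)
```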