Noise2Filter: fast, self-supervised learning and real-time
reconstruction for 3D Computed Tomography
- URL: http://arxiv.org/abs/2007.01636v1
- Date: Fri, 3 Jul 2020 12:12:10 GMT
- Title: Noise2Filter: fast, self-supervised learning and real-time
reconstruction for 3D Computed Tomography
- Authors: Marinus J. Lagerwerf, Allard A. Hendriksen, Jan-Willem Buurlage and K.
Joost Batenburg
- Abstract summary: At X-ray beamlines, the achievable time-resolution for 3D tomographic imaging of the interior of an object has been reduced to a fraction of a second.
We propose Noise2Filter, a learned filter method that can be trained using only the measured data.
We show limited loss of accuracy compared to training with additional training data, and improved accuracy compared to standard filter-based methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: At X-ray beamlines of synchrotron light sources, the achievable
time-resolution for 3D tomographic imaging of the interior of an object has
been reduced to a fraction of a second, enabling rapidly changing structures to
be examined. The associated data acquisition rates require sizable
computational resources for reconstruction. Therefore, full 3D reconstruction
of the object is usually performed after the scan has completed. Quasi-3D
reconstruction -- where several interactive 2D slices are computed instead of a
3D volume -- has been shown to be significantly more efficient, and can enable
the real-time reconstruction and visualization of the interior. However,
quasi-3D reconstruction relies on filtered backprojection type algorithms,
which are typically sensitive to measurement noise. To overcome this issue, we
propose Noise2Filter, a learned filter method that can be trained using only
the measured data, and does not require any additional training data. This
method combines quasi-3D reconstruction, learned filters, and self-supervised
learning to derive a tomographic reconstruction method that can be trained in
under a minute and evaluated in real-time. We show limited loss of accuracy
compared to training with additional training data, and improved accuracy
compared to standard filter-based methods.
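The core idea above — a filtered-backprojection filter that is learned from the noisy measurements themselves by splitting the projection angles into subsets with independent noise (Noise2Inverse-style) — can be sketched in a toy 2D setting. The snippet below is a minimal illustrative sketch, not the authors' implementation: the tiny rotate-and-sum projector, the 8-bin frequency parameterization of the filter, and all function names are assumptions made for the example. Because filtered backprojection is linear in the filter coefficients, the self-supervised training reduces to a small least-squares problem, which is why the method can be trained in seconds.

```python
import numpy as np
from scipy.ndimage import rotate

n = 32
rng = np.random.default_rng(0)

# --- toy parallel-beam tomography operators (illustrative, not optimized) ---
def radon(img, angles_deg):
    """Forward project: rotate the image and sum along columns."""
    return np.stack([rotate(img, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sino, angles_deg):
    """Unfiltered backprojection: smear each projection back over the image."""
    rec = np.zeros((n, n))
    for a, p in zip(angles_deg, sino):
        rec += rotate(np.tile(p, (n, 1)), a, reshape=False, order=1)
    return rec * np.pi / (2 * len(angles_deg))

def fbp(sino, angles_deg, filt):
    """Filtered backprojection with a per-frequency filter `filt` (length n)."""
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * filt, axis=1))
    return backproject(filtered, angles_deg)

# --- noisy measurement of a simple square phantom ---
phantom = np.zeros((n, n))
phantom[10:22, 10:22] = 1.0
angles = np.linspace(0.0, 180.0, 64, endpoint=False)
sino = radon(phantom, angles) + 0.5 * rng.standard_normal((len(angles), n))

# Noise2Inverse-style split: even vs. odd angles (independent noise per split)
A_idx, B_idx = np.arange(0, 64, 2), np.arange(1, 64, 2)

# Parameterize the filter by K frequency bins; FBP is linear in these weights
K = 8
freq = np.abs(np.fft.fftfreq(n))
bins = np.minimum((freq / freq.max() * K).astype(int), K - 1)
basis_recons = [fbp(sino[A_idx], angles[A_idx], (bins == k).astype(float))
                for k in range(K)]

# Self-supervised target: ramp-filtered FBP of the complementary angle subset
target = fbp(sino[B_idx], angles[B_idx], freq)

# "Training" is a least-squares fit of K filter weights -- no labeled data
M = np.stack([b.ravel() for b in basis_recons], axis=1)
h, *_ = np.linalg.lstsq(M, target.ravel(), rcond=None)
learned_filter = h[bins]
recon = fbp(sino[A_idx], angles[A_idx], learned_filter)
```

Because the noise in the two angle subsets is independent, regressing the filtered reconstruction of one subset onto the reconstruction of the other drives the learned filter toward a denoising solution, without any ground-truth images.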
Related papers
- Visual SLAM with 3D Gaussian Primitives and Depth Priors Enabling Novel View Synthesis [11.236094544193605]
Conventional geometry-based SLAM systems lack dense 3D reconstruction capabilities.
We propose a real-time RGB-D SLAM system that incorporates a novel view synthesis technique, 3D Gaussian Splatting.
arXiv Detail & Related papers (2024-08-10T21:23:08Z) - R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in rendering image and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z) - A Deep Learning Method for Simultaneous Denoising and Missing Wedge Reconstruction in Cryogenic Electron Tomography [23.75819355889607]
We propose a deep-learning approach for simultaneous denoising and missing wedge reconstruction called DeepDeWedge.
The algorithm requires no ground truth data and is based on fitting a neural network to the 2D projections using a self-supervised loss.
arXiv Detail & Related papers (2023-11-09T17:34:57Z) - Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion [67.71624118802411]
We present Farm3D, a method for learning category-specific 3D reconstructors for articulated objects.
We propose a framework that uses an image generator, such as Stable Diffusion, to generate synthetic training data.
Our network can be used for analysis, including monocular reconstruction, or for synthesis, generating articulated assets for real-time applications such as video games.
arXiv Detail & Related papers (2023-04-20T17:59:34Z) - Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z) - Simulator-Based Self-Supervision for Learned 3D Tomography
Reconstruction [34.93595625809309]
Prior machine learning approaches require reference reconstructions computed by another algorithm for training.
We train our model in a fully self-supervised manner using only noisy 2D X-ray data.
Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions.
arXiv Detail & Related papers (2022-12-14T13:21:37Z) - A Self-Supervised Approach to Reconstruction in Sparse X-Ray Computed
Tomography [1.0806206850043696]
This work develops and validates a self-supervised probabilistic deep learning technique, the physics-informed variational autoencoder.
Deep neural networks have been used to transform sparse 2-D projection measurements to a 3-D reconstruction by training on a dataset of known similar objects.
This creates a circular dependency: high-quality reconstructions cannot be generated without deep learning, and the deep neural network cannot be trained without such reconstructions.
arXiv Detail & Related papers (2022-10-30T02:33:45Z) - SCFusion: Real-time Incremental Scene Reconstruction with Semantic
Completion [86.77318031029404]
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
arXiv Detail & Related papers (2020-10-26T15:31:52Z) - Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z) - Procrustean Regression Networks: Learning 3D Structure of Non-Rigid
Objects from 2D Annotations [42.476537776831314]
We propose a novel framework for training neural networks which is capable of learning 3D information of non-rigid objects.
The proposed framework shows superior reconstruction performance to the state-of-the-art method on the Human 3.6M, 300-VW, and SURREAL datasets.
arXiv Detail & Related papers (2020-07-21T17:29:20Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled
Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.