Noise2Filter: fast, self-supervised learning and real-time
reconstruction for 3D Computed Tomography
- URL: http://arxiv.org/abs/2007.01636v1
- Date: Fri, 3 Jul 2020 12:12:10 GMT
- Authors: Marinus J. Lagerwerf, Allard A. Hendriksen, Jan-Willem Buurlage and K.
Joost Batenburg
- Abstract summary: At X-ray beamlines, the achievable time-resolution for 3D tomographic imaging of the interior of an object has been reduced to a fraction of a second.
We propose Noise2Filter, a learned filter method that can be trained using only the measured data.
We show limited loss of accuracy compared to training with additional training data, and improved accuracy compared to standard filter-based methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: At X-ray beamlines of synchrotron light sources, the achievable
time-resolution for 3D tomographic imaging of the interior of an object has
been reduced to a fraction of a second, enabling rapidly changing structures to
be examined. The associated data acquisition rates require sizable
computational resources for reconstruction. Therefore, full 3D reconstruction
of the object is usually performed after the scan has completed. Quasi-3D
reconstruction -- where several interactively selected 2D slices are computed instead of a
3D volume -- has been shown to be significantly more efficient, and can enable
the real-time reconstruction and visualization of the interior. However,
quasi-3D reconstruction relies on filtered backprojection type algorithms,
which are typically sensitive to measurement noise. To overcome this issue, we
propose Noise2Filter, a learned filter method that can be trained using only
the measured data, and does not require any additional training data. This
method combines quasi-3D reconstruction, learned filters, and self-supervised
learning to derive a tomographic reconstruction method that can be trained in
under a minute and evaluated in real-time. We show limited loss of accuracy
compared to training with additional training data, and improved accuracy
compared to standard filter-based methods.
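As a rough illustration of the self-supervised filter-learning idea, the following toy sketch learns a linear denoising filter from noisy measurements alone, with no clean target. This is an illustrative analogy, not the authors' implementation: Noise2Filter learns a reconstruction filter for filtered-backprojection-type algorithms on tomographic projection data, whereas this sketch works on a 1D signal, and all names and parameters here are assumptions.

```python
import numpy as np

# Toy 1D analogue of self-supervised filter learning (illustrative only:
# Noise2Filter learns a filter for FBP-type tomographic reconstruction;
# here we learn a linear denoising filter without any clean target).
rng = np.random.default_rng(0)

n, k, sigma = 2000, 9, 0.5                   # signal length, filter taps, noise level
x = np.sin(np.linspace(0, 8 * np.pi, n))     # clean signal (never seen by training)
y1 = x + sigma * rng.standard_normal(n)      # first noisy measurement
y2 = x + sigma * rng.standard_normal(n)      # independent second measurement

def windows(y, k):
    """Sliding windows of length k over a reflect-padded signal."""
    pad = k // 2
    yp = np.pad(y, pad, mode="reflect")
    return np.lib.stride_tricks.sliding_window_view(yp, k)

# Self-supervised training: fit filter w so that filtering y1 predicts y2.
# Because the noise in y1 and y2 is independent, the least-squares optimum
# approximates the filter that maps a noisy measurement to the clean signal.
A = windows(y1, k)                           # design matrix, shape (n, k)
w, *_ = np.linalg.lstsq(A, y2, rcond=None)   # learned filter coefficients

# Apply the learned filter to a fresh noisy measurement.
y3 = x + sigma * rng.standard_normal(n)
denoised = windows(y3, k) @ w

mse_noisy = np.mean((y3 - x) ** 2)
mse_denoised = np.mean((denoised - x) ** 2)
```

The independent-measurement split plays the role that splitting the projection angles into complementary subsets plays in Noise2Inverse-style training, which Noise2Filter builds on: each subset's reconstruction serves as a training target for the other, so no ground-truth volume is needed.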
Related papers
- A Deep Learning Method for Simultaneous Denoising and Missing Wedge Reconstruction in Cryogenic Electron Tomography [23.75819355889607]
We propose a deep-learning approach for simultaneous denoising and missing wedge reconstruction called DeepDeWedge.
The algorithm requires no ground truth data and is based on fitting a neural network to the 2D projections using a self-supervised loss.
DeepDeWedge performs better than CryoCARE and IsoNet, which are state-of-the-art methods for denoising and missing wedge reconstruction.
arXiv Detail & Related papers (2023-11-09T17:34:57Z)
- Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion [67.71624118802411]
We present Farm3D, a method for learning category-specific 3D reconstructors for articulated objects.
We propose a framework that uses an image generator, such as Stable Diffusion, to generate synthetic training data.
Our network can be used for analysis, including monocular reconstruction, or for synthesis, generating articulated assets for real-time applications such as video games.
arXiv Detail & Related papers (2023-04-20T17:59:34Z)
- BS3D: Building-scale 3D Reconstruction from RGB-D Images [25.604775584883413]
We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera.
Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms.
arXiv Detail & Related papers (2023-01-03T11:46:14Z) - Simulator-Based Self-Supervision for Learned 3D Tomography
Reconstruction [34.93595625809309]
Prior machine learning approaches require reference reconstructions computed by another algorithm for training.
We train our model in a fully self-supervised manner using only noisy 2D X-ray data.
Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions.
arXiv Detail & Related papers (2022-12-14T13:21:37Z) - A Self-Supervised Approach to Reconstruction in Sparse X-Ray Computed
Tomography [1.0806206850043696]
This work develops and validates a self-supervised probabilistic deep learning technique, the physics-informed variational autoencoder.
Deep neural networks have been used to transform sparse 2-D projection measurements to a 3-D reconstruction by training on a dataset of known similar objects.
This creates a circular dependency: high-quality reconstructions require a trained deep network, yet the network cannot be trained without such reconstructions.
arXiv Detail & Related papers (2022-10-30T02:33:45Z) - RandomRooms: Unsupervised Pre-training from Synthetic Shapes and
Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to other real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- Learning to Recover 3D Scene Shape from a Single Image [98.20106822614392]
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
arXiv Detail & Related papers (2020-12-17T02:35:13Z)
- SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion [86.77318031029404]
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
arXiv Detail & Related papers (2020-10-26T15:31:52Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present deep neural network methodology to reconstruct the 3d pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Procrustean Regression Networks: Learning 3D Structure of Non-Rigid Objects from 2D Annotations [42.476537776831314]
We propose a novel framework for training neural networks which is capable of learning 3D information of non-rigid objects.
The proposed framework shows superior reconstruction performance to the state-of-the-art method on the Human 3.6M, 300-VW, and SURREAL datasets.
arXiv Detail & Related papers (2020-07-21T17:29:20Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.