A Deep Learning Method for Simultaneous Denoising and Missing Wedge Reconstruction in Cryogenic Electron Tomography
- URL: http://arxiv.org/abs/2311.05539v2
- Date: Thu, 11 Apr 2024 13:39:18 GMT
- Title: A Deep Learning Method for Simultaneous Denoising and Missing Wedge Reconstruction in Cryogenic Electron Tomography
- Authors: Simon Wiedemann, Reinhard Heckel
- Abstract summary: We propose a deep-learning approach for simultaneous denoising and missing wedge reconstruction called DeepDeWedge.
The algorithm requires no ground truth data and is based on fitting a neural network to the 2D projections using a self-supervised loss.
DeepDeWedge performs better than CryoCARE and IsoNet, which are state-of-the-art methods for denoising and missing wedge reconstruction.
- Score: 23.75819355889607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cryogenic electron tomography is a technique for imaging biological samples in 3D. A microscope collects a series of 2D projections of the sample, and the goal is to reconstruct the 3D density of the sample, called the tomogram. Reconstruction is difficult because the 2D projections are noisy and cannot be recorded from all directions, resulting in a missing wedge of information. Tomograms conventionally reconstructed with filtered back-projection suffer from noise and strong artifacts due to the missing wedge. Here, we propose a deep-learning approach for simultaneous denoising and missing wedge reconstruction called DeepDeWedge. The algorithm requires no ground truth data and is based on fitting a neural network to the 2D projections using a self-supervised loss. DeepDeWedge performs better than CryoCARE and IsoNet, which are state-of-the-art methods for denoising and missing wedge reconstruction, and similarly to, and in some cases better than, the combination of the two methods. At the same time, DeepDeWedge is simpler than this two-step approach, as it performs denoising and missing wedge reconstruction simultaneously rather than sequentially.
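The core idea in the abstract, fitting a network to split projection data with a self-supervised loss, can be illustrated with a short sketch. The code below is a minimal Noise2Noise-style illustration, not the authors' implementation: it assumes two noisy sub-tomograms reconstructed from split halves of the tilt series (for example, even and odd projections) and uses a hypothetical `apply_missing_wedge` helper that removes an extra wedge of Fourier coefficients; all names, shapes, and hyperparameters are illustrative.

```python
# Minimal self-supervised sketch in the spirit of the abstract above; not the
# authors' implementation. Assumes two half-set sub-tomograms are available.
import torch
import torch.nn as nn


def apply_missing_wedge(vol: torch.Tensor, half_angle_deg: float = 30.0) -> torch.Tensor:
    """Zero out a wedge of Fourier coefficients around the beam (z) axis (illustrative)."""
    d, h, w = vol.shape[-3:]
    kz = torch.fft.fftfreq(d).view(-1, 1, 1)
    kx = torch.fft.fftfreq(w).view(1, 1, -1)
    angle = torch.atan2(kz.abs(), kx.abs() + 1e-8)      # angle from the kx axis
    keep = angle <= torch.deg2rad(torch.tensor(90.0 - half_angle_deg))
    spectrum = torch.fft.fftn(vol, dim=(-3, -2, -1))
    return torch.fft.ifftn(spectrum * keep, dim=(-3, -2, -1)).real


# Stand-in for a 3D U-Net; placeholder half-set reconstructions for illustration.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
sub_tomo_a = torch.randn(1, 1, 32, 32, 32)  # half-set reconstruction A (noisy)
sub_tomo_b = torch.randn(1, 1, 32, 32, 32)  # half-set reconstruction B (noisy)

for step in range(100):
    # Input: one half with an additional wedge removed; target: the other half.
    inp = apply_missing_wedge(sub_tomo_a)
    pred = model(inp)
    loss = nn.functional.mse_loss(pred, sub_tomo_b)  # self-supervised: no clean ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the target is always the other noisy half-set, so no clean data is ever needed; removing an extra wedge from the input is what pushes the network to also fill in missing Fourier information rather than only denoise.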
Related papers
- 3D Vessel Reconstruction from Sparse-View Dynamic DSA Images via Vessel Probability Guided Attenuation Learning [79.60829508459753]
Current commercial Digital Subtraction Angiography (DSA) systems typically demand hundreds of scanning views to perform reconstruction.
The dynamic blood flow and insufficient input of sparse-view DSA images present significant challenges to the 3D vessel reconstruction task.
We propose to use a time-agnostic vessel probability field to solve this problem effectively.
arXiv Detail & Related papers (2024-05-17T11:23:33Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- Simulator-Based Self-Supervision for Learned 3D Tomography Reconstruction [34.93595625809309]
Prior machine learning approaches require reference reconstructions computed by another algorithm for training.
We train our model in a fully self-supervised manner using only noisy 2D X-ray data.
Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions.
arXiv Detail & Related papers (2022-12-14T13:21:37Z)
- WNet: A data-driven dual-domain denoising model for sparse-view computed tomography with a trainable reconstruction layer [3.832032989515628]
We propose WNet, a data-driven dual-domain denoising model which contains a trainable reconstruction layer for sparse-view artifact denoising.
We train and test our network on two clinically relevant datasets and we compare the obtained results with three different types of sparse-view CT denoising and reconstruction algorithms.
arXiv Detail & Related papers (2022-07-01T13:17:01Z)
- DH-GAN: A Physics-driven Untrained Generative Adversarial Network for 3D Microscopic Imaging using Digital Holography [3.4635026053111484]
Digital holography is a 3D imaging technique in which a laser beam with a plane wavefront is directed at an object and the intensity of the diffracted waveform, called a hologram, is measured.
Recently, deep learning (DL) methods have been used for more accurate holographic processing.
We propose a new DL architecture based on generative adversarial networks that uses a discriminative network for realizing a semantic measure for reconstruction quality.
arXiv Detail & Related papers (2022-05-25T17:13:45Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- Learning to Recover 3D Scene Shape from a Single Image [98.20106822614392]
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
arXiv Detail & Related papers (2020-12-17T02:35:13Z)
- 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z)
- Noise2Filter: fast, self-supervised learning and real-time reconstruction for 3D Computed Tomography [0.0]
At X-ray beamlines, the achievable time-resolution for 3D tomographic imaging of the interior of an object has been reduced to a fraction of a second.
We propose Noise2Filter, a learned filter method that can be trained using only the measured data.
We show limited loss of accuracy compared to training with additional training data, and improved accuracy compared to standard filter-based methods.
arXiv Detail & Related papers (2020-07-03T12:12:10Z)
- Deep DIH: Statistically Inferred Reconstruction of Digital In-Line Holography by Deep Learning [1.4619386068190985]
Digital in-line holography is commonly used to reconstruct 3D images from 2D holograms for microscopic objects.
In this paper, we propose a novel implementation of autoencoder-based deep learning architecture for single-shot hologram reconstruction.
arXiv Detail & Related papers (2020-04-25T20:39:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.