Multi-Slice Fusion for Sparse-View and Limited-Angle 4D CT Reconstruction
- URL: http://arxiv.org/abs/2008.01567v3
- Date: Sat, 20 Feb 2021 01:06:06 GMT
- Title: Multi-Slice Fusion for Sparse-View and Limited-Angle 4D CT Reconstruction
- Authors: Soumendu Majee, Thilo Balke, Craig A.J. Kemp, Gregery T. Buzzard,
Charles A. Bouman
- Abstract summary: We present multi-slice fusion, a novel algorithm for 4D reconstruction based on the fusion of multiple low-dimensional denoisers.
We implement multi-slice fusion on distributed, heterogeneous clusters in order to reconstruct large 4D volumes in reasonable time.
- Score: 3.045887205265198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse problems spanning four or more dimensions such as space, time and
other independent parameters have become increasingly important.
State-of-the-art 4D reconstruction methods use model based iterative
reconstruction (MBIR), but depend critically on the quality of the prior
modeling. Recently, plug-and-play (PnP) methods have been shown to be an
effective way to incorporate advanced prior models using state-of-the-art
denoising algorithms. However, state-of-the-art denoisers such as BM4D and deep
convolutional neural networks (CNNs) are primarily available for 2D or 3D
images and extending them to higher dimensions is difficult due to algorithmic
complexity and the increased difficulty of effective training.
In this paper, we present multi-slice fusion, a novel algorithm for 4D
reconstruction, based on the fusion of multiple low-dimensional denoisers. Our
approach uses multi-agent consensus equilibrium (MACE), an extension of
plug-and-play, as a framework for integrating the multiple lower-dimensional
models. We apply our method to 4D cone-beam X-ray CT reconstruction for
non-destructive evaluation (NDE) of samples that are dynamically moving during
acquisition. We implement multi-slice fusion on distributed, heterogeneous
clusters in order to reconstruct large 4D volumes in reasonable time and
demonstrate the inherent parallelizable nature of the algorithm. We present
simulated and real experimental results on sparse-view and limited-angle CT
data to demonstrate that multi-slice fusion can substantially improve the
quality of reconstructions relative to traditional methods, while also being
practical to implement and train.
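The MACE framework described above can be illustrated with a minimal sketch: each "agent" is a low-dimensional denoiser applied along a different slice orientation, and a Mann iteration drives the agents to a consensus fixed point. The moving-average smoothers below are toy stand-ins for the paper's CNN/BM4D denoisers, and `box_smooth`, `mace_fuse`, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_smooth(x, axis):
    # Toy 3-tap (1/4, 1/2, 1/4) smoother along one axis; a stand-in
    # for a learned 2D denoiser acting on slices of the volume.
    left = np.roll(x, 1, axis=axis)
    right = np.roll(x, -1, axis=axis)
    return 0.25 * left + 0.5 * x + 0.25 * right

def mace_fuse(x0, agents, rho=0.5, iters=100):
    # Mann iteration for the MACE fixed point w = (2G - I)(2F - I) w,
    # where F applies each agent to its own copy of the state and
    # G replaces every copy with the consensus average.
    w = np.stack([x0.copy() for _ in agents])
    for _ in range(iters):
        fw = np.stack([f(wi) for f, wi in zip(agents, w)])  # F: agent updates
        v = 2.0 * fw - w                                    # reflection of F
        z = v.mean(axis=0)                                  # G: consensus average
        w = (1.0 - rho) * w + rho * (2.0 * z - v)           # Mann averaging step
    return w.mean(axis=0)

# Two low-dimensional denoising agents acting along different orientations.
agents = [lambda x: box_smooth(x, axis=0),
          lambda x: box_smooth(x, axis=1)]

rng = np.random.default_rng(0)
noisy = np.ones((16, 16)) + 0.3 * rng.standard_normal((16, 16))
fused = mace_fuse(noisy, agents)
```

In the paper, one agent would additionally enforce data fidelity with the CT measurements; here only prior agents are shown, so the iteration simply drives the copies toward a mutually consistent smoothed image.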
Related papers
- 4Diffusion: Multi-view Video Diffusion Model for 4D Generation [55.82208863521353]
Current 4D generation methods have achieved noteworthy efficacy with the aid of advanced diffusion generative models.
We propose a novel 4D generation pipeline, namely 4Diffusion, aimed at generating spatial-temporally consistent 4D content from a monocular video.
arXiv Detail & Related papers (2024-05-31T08:18:39Z)
- EG4D: Explicit Generation of 4D Object without Score Distillation [105.63506584772331]
EG4D is a novel framework that generates high-quality and consistent 4D assets without score distillation.
Our framework outperforms the baselines in generation quality by a considerable margin.
arXiv Detail & Related papers (2024-05-28T12:47:22Z)
- Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models [116.31344506738816]
We present a novel framework, Diffusion4D, for efficient and scalable 4D content generation.
We develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets.
Our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency.
arXiv Detail & Related papers (2024-05-26T17:47:34Z)
- Distributed Stochastic Optimization of a Neural Representation Network for Time-Space Tomography Reconstruction [4.689071714940848]
4D time-space reconstruction of dynamic events or deforming objects using X-ray computed tomography (CT) is an extremely ill-posed inverse problem.
Existing approaches assume that the object remains static for the duration of several tens or hundreds of X-ray projection measurement images.
We propose to perform a 4D time-space reconstruction using a distributed implicit neural representation network that is trained using a novel distributed training algorithm.
arXiv Detail & Related papers (2024-04-29T19:41:51Z)
- MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation [54.27399121779011]
We present MVD-Fusion: a method for single-view 3D inference via generative modeling of multi-view-consistent RGB-D images.
We show that our approach can yield more accurate synthesis compared to recent state-of-the-art, including distillation-based 3D inference and prior multi-view generation methods.
arXiv Detail & Related papers (2024-04-04T17:59:57Z)
- Diffusion$^2$: Dynamic 3D Content Generation via Score Composition of Video and Multi-view Diffusion Models [6.738732514502613]
Diffusion$^2$ is a novel framework for dynamic 3D content creation.
It reconciles the knowledge about geometric consistency and temporal smoothness from 3D models to directly sample dense multi-view images.
Experiments demonstrate the efficacy of our proposed framework in generating highly seamless and consistent 4D assets.
arXiv Detail & Related papers (2024-04-02T17:58:03Z)
- MIRT: a simultaneous reconstruction and affine motion compensation technique for four dimensional computed tomography (4DCT) [3.5343621383192128]
In four-dimensional computed tomography (4DCT), 3D images of moving or deforming samples are reconstructed from a set of 2D projection images.
Recent techniques for iterative motion-compensated reconstruction either necessitate a reference acquisition or alternate image reconstruction and motion estimation steps.
We propose the Motion-compensated Iterative Reconstruction Technique (MIRT), an efficient iterative reconstruction scheme that combines image reconstruction and affine motion estimation in a single update step.
arXiv Detail & Related papers (2024-02-07T00:10:39Z)
- Motion2VecSets: 4D Latent Vector Set Diffusion for Non-rigid Shape Reconstruction and Tracking [52.393359791978035]
Motion2VecSets is a 4D diffusion model for dynamic surface reconstruction from point cloud sequences.
We parameterize 4D dynamics with latent sets instead of using global latent codes.
For more temporally-coherent object tracking, we synchronously denoise deformation latent sets and exchange information across multiple frames.
arXiv Detail & Related papers (2024-01-12T15:05:08Z) - End-to-End Multi-View Structure-from-Motion with Hypercorrelation
Volumes [7.99536002595393]
Deep learning techniques have been proposed to tackle the structure-from-motion problem.
We improve on the state-of-the-art two-view structure-from-motion (SfM) approach.
We extend it to the general multi-view case and evaluate it on the complex benchmark dataset DTU.
arXiv Detail & Related papers (2022-09-14T20:58:44Z) - LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human
Modeling [69.56581851211841]
We propose a novel Local 4D implicit Representation for Dynamic clothed human, named LoRD.
Our key insight is to encourage the network to learn the latent codes of local part-level representation.
LoRD has strong capability for representing 4D humans, and outperforms state-of-the-art methods on practical applications.
arXiv Detail & Related papers (2022-08-18T03:49:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.