Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction
- URL: http://arxiv.org/abs/2405.02509v1
- Date: Fri, 3 May 2024 22:50:59 GMT
- Title: Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction
- Authors: Jiayang Shi, Junyi Zhu, Daniel M. Pelt, K. Joost Batenburg, Matthew B. Blaschko
- Abstract summary: Implicit Neural Representations (INRs) have shown promise in addressing sparse-view CT reconstruction.
We propose a novel approach to improve reconstruction quality through joint reconstruction of multiple objects using INRs.
- Score: 12.381526863294061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computed Tomography (CT) is pivotal in industrial quality control and medical diagnostics. Sparse-view CT, offering reduced ionizing radiation, faces challenges due to its under-sampled nature, leading to ill-posed reconstruction problems. Recent advancements in Implicit Neural Representations (INRs) have shown promise in addressing sparse-view CT reconstruction. Recognizing that CT often involves scanning similar subjects, we propose a novel approach to improve reconstruction quality through joint reconstruction of multiple objects using INRs. This approach can potentially leverage both the strengths of INRs and the statistical regularities across multiple objects. While current INR joint reconstruction techniques primarily focus on accelerating convergence via meta-initialization, they are not specifically tailored to enhance reconstruction quality. To address this gap, we introduce a novel INR-based Bayesian framework integrating latent variables to capture the inter-object relationships. These variables serve as a dynamic reference throughout the optimization, thereby enhancing individual reconstruction fidelity. Our extensive experiments, which assess various key factors such as reconstruction quality, resistance to overfitting, and generalizability, demonstrate significant improvements over baselines in common numerical metrics. This underscores a notable advancement in CT reconstruction methods.
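As a rough illustration of the joint-reconstruction idea sketched in the abstract, the following minimal PyTorch snippet fits one small INR per object while pulling every INR toward a dynamically updated shared reference. It is not the authors' code or their Bayesian formulation: the network, the random matrix standing in for the sparse-view projection operator, the placeholder measurements, and all hyper-parameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): fit one INR per object while pulling
# all INRs toward a shared, dynamically updated reference that encodes
# inter-object regularities. Operator, data, and sizes are placeholders.
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    """Coordinate MLP: maps (x, y) in [-1, 1]^2 to an attenuation value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, coords):
        return self.net(coords)

def flat_params(model):
    return torch.cat([p.reshape(-1) for p in model.parameters()])

n_objects, n_rays, lam = 4, 256, 1e-3
coords = torch.rand(4096, 2) * 2 - 1                    # shared sampling grid
A = torch.randn(n_rays, 4096) / 4096 ** 0.5             # stand-in for a sparse-view Radon operator
inrs = [TinyINR() for _ in range(n_objects)]
sinograms = [torch.randn(n_rays, 1) for _ in range(n_objects)]  # placeholder measurements
opt = torch.optim.Adam([p for m in inrs for p in m.parameters()], lr=1e-3)

for step in range(500):
    opt.zero_grad()
    # Dynamic reference: mean of the current INR weights, detached so it acts as a target.
    ref = torch.stack([flat_params(m) for m in inrs]).mean(0).detach()
    loss = 0.0
    for m, y in zip(inrs, sinograms):
        pred = A @ m(coords)                                      # simulated projections
        loss = loss + ((pred - y) ** 2).mean()                    # data fidelity per object
        loss = loss + lam * ((flat_params(m) - ref) ** 2).mean()  # pull toward shared reference
    loss.backward()
    opt.step()
```

In the paper's actual framework the shared information is carried by latent variables inside a Bayesian model rather than by a plain weight average; the snippet only conveys the "dynamic reference" intuition.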
Related papers
- AC-IND: Sparse CT reconstruction based on attenuation coefficient estimation and implicit neural distribution [12.503822675024054]
Computed tomography (CT) reconstruction plays a crucial role in industrial nondestructive testing and medical diagnosis.
Sparse view CT reconstruction aims to reconstruct high-quality CT images while only using a small number of projections.
We introduce AC-IND, a self-supervised method based on Attenuation Coefficient Estimation and Implicit Neural Distribution.
arXiv Detail & Related papers (2024-09-11T10:34:41Z)
- DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction [45.00528216648563]
Diffusion Prior Driven Neural Representation (DPER) is an unsupervised framework designed to address the exceptionally ill-posed CT reconstruction inverse problems.
DPER adopts the Half Quadratic Splitting (HQS) algorithm to decompose the inverse problem into data fidelity and distribution prior sub-problems.
We conduct comprehensive experiments to evaluate the performance of DPER on LACT and ultra-SVCT reconstruction with two public datasets.
arXiv Detail & Related papers (2024-04-27T12:55:13Z)
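The DPER entry above mentions a Half Quadratic Splitting (HQS) decomposition into data-fidelity and prior sub-problems. The snippet below is a generic, hedged HQS loop, not DPER's implementation: the forward operator is a random matrix and the diffusion prior is stood in for by a trivial smoothing "denoiser".

```python
# Generic HQS alternation as referenced in the DPER entry above (placeholders only).
import torch

def hqs_reconstruct(A, y, denoise, n_iters=50, mu=1.0, step=1e-2, inner=20):
    """Approximately solve min_x ||Ax - y||^2 + R(x) by alternating
       x-step: min_x ||Ax - y||^2 + (mu/2)||x - z||^2   (data fidelity)
       z-step: z = denoise(x)                            (prior / proximal step)
    """
    x = torch.zeros(A.shape[1])
    z = torch.zeros(A.shape[1])
    for _ in range(n_iters):
        # x-step: a few gradient-descent iterations on the quadratic objective
        for _ in range(inner):
            grad = A.T @ (A @ x - y) + mu * (x - z)
            x = x - step * grad
        # z-step: the prior sub-problem, here approximated by a hand-made denoiser
        z = denoise(x)
    return x

# Toy usage with a random operator and a smoothing "denoiser"
A = torch.randn(64, 256) / 16
y = torch.randn(64)
smooth = lambda v: 0.9 * v + 0.1 * v.mean()
x_hat = hqs_reconstruct(A, y, smooth)
```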
- Solving Low-Dose CT Reconstruction via GAN with Local Coherence [2.325977856241404]
We propose a novel approach using generative adversarial networks (GANs) with enhanced local coherence.
The proposed method can capture the local coherence of adjacent images by optical flow, which yields significant improvements in the precision and stability of the constructed images.
arXiv Detail & Related papers (2023-09-24T08:55:42Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
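The MGDUN entry above refers to unfolding an iterative, observation-matrix-guided algorithm into a network. The following is a generic deep-unfolding sketch under that interpretation, not the MGDUN architecture: each stage takes a gradient step on the data term defined by the observation matrix and then applies a small learned refinement; the operator and layer sizes are placeholders.

```python
# Generic deep-unfolding sketch in the spirit of the MGDUN entry above
# (not the actual MGDUN architecture; all names and sizes are illustrative).
import torch
import torch.nn as nn

class UnfoldingNet(nn.Module):
    def __init__(self, n_stages=5, n=256):
        super().__init__()
        self.step = nn.Parameter(torch.full((n_stages,), 0.1))   # learned step sizes
        self.refine = nn.ModuleList(
            nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n))
            for _ in range(n_stages)
        )

    def forward(self, A, y):
        x = A.T @ y                                     # simple back-projection initialization
        for k, net in enumerate(self.refine):
            x = x - self.step[k] * (A.T @ (A @ x - y))  # data-consistency gradient step
            x = x + net(x)                              # learned residual refinement (prior)
        return x

# Toy usage with a random observation matrix
A = torch.randn(64, 256) / 16
y = torch.randn(64)
x_hat = UnfoldingNet()(A, y)
```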
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is promising to yield SR images with higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Computed Tomography Reconstruction using Generative Energy-Based Priors [13.634603375405744]
We learn a parametric regularizer with a global receptive field by maximizing its likelihood on reference CT data.
We apply the regularizer to limited-angle and few-view CT reconstruction problems, where it outperforms traditional reconstruction algorithms by a large margin.
arXiv Detail & Related papers (2022-03-23T18:26:23Z)
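The entry above describes learning a parametric regularizer and applying it to limited-angle and few-view reconstruction. The sketch below shows only the reconstruction side, plugging a placeholder (untrained) energy network into a variational objective; the likelihood-based training of the regularizer on reference CT data is not shown, and the operator is a stand-in.

```python
# Hedged sketch of variational reconstruction with a learned regularizer, in the
# spirit of the energy-based-prior entry above. E_theta is an untrained placeholder.
import torch
import torch.nn as nn

E_theta = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))  # placeholder energy

def reconstruct(A, y, n_iters=200, lam=0.1, lr=1e-2):
    x = torch.zeros(A.shape[1], requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((A @ x - y) ** 2).sum() + lam * E_theta(x).squeeze()  # data term + learned prior
        loss.backward()
        opt.step()
    return x.detach()

A = torch.randn(32, 256) / 16   # stand-in for a few-view projection operator
y = torch.randn(32)
x_hat = reconstruct(A, y)
```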
- Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance.
arXiv Detail & Related papers (2021-11-18T03:06:25Z)
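The texture-transformer entry above formulates the under-sampled data as queries and the reference data as keys. A hedged sketch of that cross-attention pattern using PyTorch's standard multi-head attention is given below; it is not the actual TTM module, and the feature shapes are illustrative.

```python
# Cross-attention sketch of the query/key formulation mentioned in the entry above:
# features of the under-sampled image act as queries, features of a fully-sampled
# reference act as keys and values. Not the actual TTM; shapes are illustrative.
import torch
import torch.nn as nn

dim, n_tokens = 64, 196
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

undersampled_feat = torch.randn(1, n_tokens, dim)   # queries: degraded target-image features
reference_feat = torch.randn(1, n_tokens, dim)      # keys/values: reference-image features

# Texture transferred from the reference to the target, weighted by feature similarity
transferred, weights = attn(query=undersampled_feat,
                            key=reference_feat,
                            value=reference_feat)
fused = undersampled_feat + transferred             # residual fusion before a reconstruction head
```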
- Multi-Modal MRI Reconstruction with Spatial Alignment Network [51.74078260367654]
In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, considering the redundancy between different contrasts or modalities, a target MRI modality under-sampled in k-space can be better reconstructed with the help of a fully sampled sequence.
In this paper, we integrate the spatial alignment network with reconstruction, to improve the quality of the reconstructed target modality.
arXiv Detail & Related papers (2021-08-12T08:46:35Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Stabilizing Deep Tomographic Reconstruction [25.179542326326896]
We propose an Analytic Compressed Iterative Deep (ACID) framework to address this challenge.
ACID synergizes a deep reconstruction network trained on big data, kernel awareness from CS-inspired processing, and iterative refinement.
Our study demonstrates that the deep reconstruction using ACID is accurate and stable, and sheds light on the converging mechanism of the ACID iteration.
arXiv Detail & Related papers (2020-08-04T21:35:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.