DenoMamba: A fused state-space model for low-dose CT denoising
- URL: http://arxiv.org/abs/2409.13094v1
- Date: Thu, 19 Sep 2024 21:32:07 GMT
- Title: DenoMamba: A fused state-space model for low-dose CT denoising
- Authors: Şaban Öztürk, Oğuz Can Duran, Tolga Çukur
- Abstract summary: Low-dose computed tomography (LDCT) lowers potential risks linked to radiation exposure.
LDCT denoising is based on neural network models that learn data-driven image priors to separate noise evoked by dose reduction from underlying tissue signals.
DenoMamba is a novel denoising method based on state-space modeling (SSM) that efficiently captures short- and long-range context in medical images.
- Score: 6.468495781611433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-dose computed tomography (LDCT) lowers potential risks linked to radiation exposure while relying on advanced denoising algorithms to maintain diagnostic quality in reconstructed images. The reigning paradigm in LDCT denoising is based on neural network models that learn data-driven image priors to separate noise evoked by dose reduction from underlying tissue signals. Naturally, the fidelity of these priors depends on the model's ability to capture the broad range of contextual features evident in CT images. Earlier convolutional neural networks (CNNs) are highly adept at efficiently capturing short-range spatial context, but their limited receptive fields reduce sensitivity to interactions over longer distances. Although transformers based on self-attention mechanisms have recently been proposed to increase sensitivity to long-range context, they can suffer from suboptimal performance and efficiency due to elevated model complexity, particularly for high-resolution CT images. For high-quality restoration of LDCT images, here we introduce DenoMamba, a novel denoising method based on state-space modeling (SSM) that efficiently captures short- and long-range context in medical images. Following an hourglass architecture with encoder-decoder stages, DenoMamba employs a spatial SSM module to encode spatial context and a novel channel SSM module equipped with a secondary gated convolution network to encode latent features of channel context at each stage. Feature maps from the two modules are then consolidated with low-level input features via a convolution fusion module (CFM). Comprehensive experiments on LDCT datasets with 25% and 10% dose reduction demonstrate that DenoMamba outperforms state-of-the-art denoisers with average improvements of 1.4dB PSNR, 1.1% SSIM, and 1.6% RMSE in recovered image quality.
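The abstract specifies the per-stage data flow: a spatial SSM module and a channel SSM module with a secondary gated convolution, whose outputs are consolidated with low-level input features by a convolution fusion module (CFM). The sketch below mirrors that structure in PyTorch but is not the authors' implementation: a depthwise convolution stands in for the spatial SSM and a gated pointwise convolution for the channel SSM, and all module and parameter names are illustrative assumptions.

```python
# Minimal structural sketch of one DenoMamba-style stage (illustrative only).
# The actual spatial/channel SSM modules use selective state-space (Mamba)
# scans; here a depthwise convolution stands in for the spatial mixer and a
# gated pointwise convolution for the channel mixer.
import torch
import torch.nn as nn

class SpatialMixer(nn.Module):
    """Stand-in for the spatial SSM module: mixes spatial context per channel."""
    def __init__(self, dim: int):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.proj(self.dw(x))  # residual spatial mixing

class ChannelMixer(nn.Module):
    """Stand-in for the channel SSM module and its secondary gated convolution."""
    def __init__(self, dim: int):
        super().__init__()
        self.pw = nn.Conv2d(dim, 2 * dim, kernel_size=1)  # value and gate paths
        self.out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v, g = self.pw(x).chunk(2, dim=1)
        return x + self.out(v * torch.sigmoid(g))  # gated channel mixing

class FusionStage(nn.Module):
    """One stage: spatial and channel context consolidated with the low-level
    input features via a convolution fusion module (CFM), as in the abstract."""
    def __init__(self, dim: int):
        super().__init__()
        self.spatial = SpatialMixer(dim)
        self.channel = ChannelMixer(dim)
        self.cfm = nn.Conv2d(3 * dim, dim, kernel_size=1)  # fuse both maps + input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s, c = self.spatial(x), self.channel(x)
        return self.cfm(torch.cat([s, c, x], dim=1))

stage = FusionStage(dim=32)
y = stage(torch.randn(1, 32, 64, 64))  # one feature map from an LDCT slice
print(y.shape)  # torch.Size([1, 32, 64, 64])
```

In the full model, stages of this form are stacked in an hourglass encoder-decoder, with the stand-in mixers replaced by actual selective state-space scans.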
Related papers
- A Flow-based Truncated Denoising Diffusion Model for Super-resolution Magnetic Resonance Spectroscopic Imaging [34.32290273033808]
This work introduces a Flow-based Truncated Denoising Diffusion Model (FTDDM) for super-resolution MRSI.
It shortens the diffusion process by truncating the diffusion chain, and the truncated steps are estimated using a normalizing flow-based network.
We demonstrate that FTDDM outperforms existing generative models while speeding up the sampling process by over 9-fold.
arXiv Detail & Related papers (2024-10-25T03:42:35Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce a novel self-supervised CT image denoising method called WIA-LD2ND, which uses only NDCT data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z)
- Low-dose CT Denoising with Language-engaged Dual-space Alignment [21.172319554618497]
We propose a plug-and-play Language-Engaged Dual-space Alignment loss (LEDA) to optimize low-dose CT denoising models.
Our idea is to leverage large language models (LLMs) to align denoised CT and normal dose CT images in both the continuous perceptual space and discrete semantic space.
LEDA involves two steps: the first is to pretrain an LLM-guided CT autoencoder, which can encode a CT image into continuous high-level features and quantize them into a token space to produce semantic tokens.
arXiv Detail & Related papers (2024-03-10T08:21:50Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Denoising Simulated Low-Field MRI (70mT) using Denoising Autoencoders (DAE) and Cycle-Consistent Generative Adversarial Networks (Cycle-GAN) [68.8204255655161]
A Cycle-Consistent Generative Adversarial Network (Cycle-GAN) is implemented to yield high-field, high-resolution, high signal-to-noise ratio (SNR) Magnetic Resonance Imaging (MRI) images.
The simulated low-field images were used to train a Denoising Autoencoder (DAE) and a Cycle-GAN, in both paired and unpaired cases.
This work demonstrates the use of a generative deep learning model that can outperform classical DAEs in improving low-field MRI images and does not require image pairs.
arXiv Detail & Related papers (2023-07-12T00:01:00Z)
- CoreDiff: Contextual Error-Modulated Generalized Diffusion Model for Low-Dose CT Denoising and Generalization [41.64072751889151]
Low-dose computed tomography (LDCT) images suffer from noise and artifacts due to photon starvation and electronic noise.
This paper presents a novel COntextual eRror-modulated gEneralized Diffusion model for low-dose CT (LDCT) denoising, termed CoreDiff.
arXiv Detail & Related papers (2023-04-04T14:13:13Z)
- Self Supervised Low Dose Computed Tomography Image Denoising Using Invertible Network Exploiting Inter Slice Congruence [20.965610734723636]
This study proposes a novel method for self-supervised low-dose CT denoising to alleviate the requirement of paired LDCT and NDCT images.
We have trained an invertible neural network to minimize the pixel-wise mean-squared distance between a noisy slice and the average of its two immediately adjacent noisy slices; a minimal sketch of this objective appears after the list below.
arXiv Detail & Related papers (2022-11-03T07:16:18Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Noise Conscious Training of Non Local Neural Network powered by Self Attentive Spectral Normalized Markovian Patch GAN for Low Dose CT Denoising [20.965610734723636]
Deep learning-based techniques have emerged as the dominant method for low-dose CT (LDCT) denoising.
In this study, we propose a novel convolutional module as the first attempt to utilize the neighborhood similarity of CT images for denoising tasks.
We then address the non-stationarity of CT noise by introducing a new noise-aware mean-squared-error loss for LDCT denoising.
arXiv Detail & Related papers (2020-11-11T10:44:52Z)
- Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
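The inter-slice congruence objective from the self-supervised invertible-network paper listed above reduces to a simple pixel-wise criterion between the network output for a slice and the average of its neighbours. Below is a minimal sketch of that loss under stated assumptions: a toy convolutional layer stands in for the paper's invertible network, and all tensor names and shapes are illustrative.

```python
# Sketch of the inter-slice congruence loss: the denoised centre slice is
# pulled toward the average of its two adjacent noisy slices, so no clean
# NDCT targets are needed. The conv layer is a stand-in, not the paper's
# invertible network.
import torch
import torch.nn as nn
import torch.nn.functional as F

def inter_slice_loss(model: nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """volume: (S, 1, H, W) stack of adjacent noisy LDCT slices."""
    center = volume[1:-1]                             # slices with two neighbours
    neighbour_avg = 0.5 * (volume[:-2] + volume[2:])  # average of adjacent slices
    return F.mse_loss(model(center), neighbour_avg)

denoiser = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # toy stand-in denoiser
noisy_volume = torch.randn(8, 1, 64, 64)              # hypothetical 8-slice stack
loss = inter_slice_loss(denoiser, noisy_volume)
loss.backward()  # gradients flow without any clean reference images
```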