Deep neural networks-based denoising models for CT imaging and their
efficacy
- URL: http://arxiv.org/abs/2111.09539v1
- Date: Thu, 18 Nov 2021 06:18:26 GMT
- Title: Deep neural networks-based denoising models for CT imaging and their
efficacy
- Authors: Prabhat KC, Rongping Zeng, M. Mehdi Farhangi, Kyle J. Myers
- Abstract summary: We aim to examine the image quality of Deep Neural Network (DNN) results from a holistic viewpoint for low-dose CT image denoising.
We build a library of advanced DNN denoising architectures such as the DnCNN, U-Net, Red-Net, GAN, etc.
Each network is configured and trained so that it yields its best performance in terms of the PSNR and SSIM.
- Score: 0.3058685580689604
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Most of the Deep Neural Network (DNN) based CT image denoising literature
shows that DNNs outperform traditional iterative methods in terms of metrics
such as the RMSE, the PSNR and the SSIM. In many instances, using the same
metrics, the DNN results from low-dose inputs are also shown to be comparable
to their high-dose counterparts. However, these metrics do not reveal if the
DNN results preserve the visibility of subtle lesions or if they alter the CT
image properties such as the noise texture. Accordingly, in this work, we seek
to examine the image quality of the DNN results from a holistic viewpoint for
low-dose CT image denoising. First, we build a library of advanced DNN
denoising architectures. This library is comprised of denoising architectures
such as the DnCNN, U-Net, Red-Net, GAN, etc. Next, each network is configured
and trained so that it yields its best performance in terms of the PSNR
and SSIM. To this end, data inputs (e.g. training patch-size, reconstruction
kernel) and numeric-optimizer inputs (e.g. minibatch size, learning rate, loss
function) are tuned accordingly. Finally, outputs from the trained networks
are further subjected to a series of CT bench testing metrics such as the
contrast-dependent MTF, the NPS and the HU accuracy. These metrics are employed
to perform a more nuanced study of the resolution of the DNN outputs'
low-contrast features, their noise textures, and their CT number accuracy to
better understand the impact each DNN algorithm has on these underlying
attributes of image quality.
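The two-stage methodology above (tune each denoiser for its best PSNR/SSIM, then bench-test the outputs with MTF, NPS and HU accuracy) can be made concrete with a small evaluation sketch. The snippet below is an illustrative approximation only, not the authors' code: the scikit-image metric calls are standard, while the ROI-based NPS estimator, the pixel-size parameter, and the function names are assumptions introduced here.

```python
# Minimal sketch (not the authors' code): PSNR/SSIM fidelity metrics plus a
# simple ROI-based 2D noise power spectrum (NPS) estimate for denoised CT slices.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fidelity_metrics(reference, test, data_range=None):
    """PSNR and SSIM between a high-dose reference slice and a denoised slice."""
    if data_range is None:
        data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, test, data_range=data_range)
    ssim = structural_similarity(reference, test, data_range=data_range)
    return psnr, ssim

def noise_power_spectrum(rois, pixel_size_mm=0.7):
    """Estimate a 2D NPS from a stack of uniform-region ROIs, shape (N, H, W).

    NPS(fx, fy) = (dx*dy / (H*W)) * mean_n |FFT2(roi_n - mean(roi_n))|^2
    """
    rois = np.asarray(rois, dtype=np.float64)
    n, h, w = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove each ROI's mean
    spectra = np.abs(np.fft.fftshift(np.fft.fft2(detrended), axes=(1, 2))) ** 2
    nps_2d = spectra.mean(axis=0) * (pixel_size_mm ** 2) / (h * w)
    freqs = np.fft.fftshift(np.fft.fftfreq(h, d=pixel_size_mm))  # cycles/mm
    return nps_2d, freqs

# Example with synthetic data: a reference slice and a noisier "denoised" version.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, (256, 256))
test = reference + rng.normal(0.0, 0.1, (256, 256))
print(fidelity_metrics(reference, test))
```

In practice the 2D NPS would be radially averaged and compared against the low-dose and routine-dose inputs to judge how each DNN reshapes the noise texture, while the contrast-dependent MTF and HU accuracy would be measured from phantom inserts rather than from patient data.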
Related papers
- QS-ADN: Quasi-Supervised Artifact Disentanglement Network for Low-Dose CT Image Denoising by Local Similarity Among Unpaired Data [10.745277107045949]
This paper introduces a new learning mode, called quasi-supervised learning, to empower the ADN for LDCT image denoising.
The proposed method is different from (but compatible with) supervised and semi-supervised learning modes and can be easily implemented by modifying existing networks.
The experimental results show that the method is competitive with state-of-the-art methods in terms of noise suppression and contextual fidelity.
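The core of the quasi-supervised idea is to build pseudo-pairs from unpaired low-dose and normal-dose data via local similarity. The sketch below illustrates one plausible reading of that pairing step with nearest-neighbour patch matching; the patch size, the L2 distance, and the helper names are assumptions for illustration, not the QS-ADN implementation.

```python
# Illustrative sketch only (not the QS-ADN code): pseudo-pairing of unpaired
# low-dose (LD) and normal-dose (ND) patches by local similarity.
import numpy as np

def extract_patches(volume, patch=32, stride=32):
    """Cut non-overlapping patches from a stack of 2D slices, shape (N, H, W)."""
    n, h, w = volume.shape
    out = []
    for s in range(n):
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                out.append(volume[s, y:y + patch, x:x + patch])
    return np.stack(out)

def quasi_pairs(ld_patches, nd_patches):
    """For each LD patch, pick the most similar ND patch (L2 distance) as a pseudo-target."""
    ld = ld_patches.reshape(len(ld_patches), -1)
    nd = nd_patches.reshape(len(nd_patches), -1)
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (ld ** 2).sum(1)[:, None] + (nd ** 2).sum(1)[None, :] - 2.0 * ld @ nd.T
    match = d2.argmin(axis=1)
    return ld_patches, nd_patches[match]    # pseudo-paired training data

rng = np.random.default_rng(1)
ld = extract_patches(rng.normal(size=(2, 128, 128)))
nd = extract_patches(rng.normal(size=(2, 128, 128)))
inputs, targets = quasi_pairs(ld, nd)
print(inputs.shape, targets.shape)
```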
arXiv Detail & Related papers (2023-02-08T07:19:13Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
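As a rough, hedged illustration of the decoupling-and-mixing idea, the sketch below preserves a "discriminative" region of one image (given by a placeholder saliency mask) and mixes only the remaining, noise-prone region with a second image; the mask source, mixing weight, and label rule are assumptions, not the authors' method.

```python
# Rough illustration (not the authors' code): region-wise mixing that keeps a
# "discriminative" region intact and mixes only the "noise-prone" remainder.
import torch

def decoupled_mix(x1, y1, x2, y2, mask1, lam=0.7):
    """x1, x2: images (C, H, W); mask1: binary (H, W) marking x1's discriminative region.

    The discriminative region of x1 is preserved; the rest is blended with x2.
    Labels are blended by the effective mixing weight (an assumption for this sketch).
    """
    mask1 = mask1.float().unsqueeze(0)                      # (1, H, W), broadcast over channels
    mixed_bg = lam * x1 + (1.0 - lam) * x2                  # blend the noise-prone background
    x_mixed = mask1 * x1 + (1.0 - mask1) * mixed_bg
    w = (mask1.mean() + (1.0 - mask1.mean()) * lam).item()  # effective weight of x1's content
    y_mixed = w * y1 + (1.0 - w) * y2                       # soft label
    return x_mixed, y_mixed

x1, x2 = torch.rand(3, 32, 32), torch.rand(3, 32, 32)
y1, y2 = torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])
mask1 = torch.zeros(32, 32)
mask1[8:24, 8:24] = 1                                       # placeholder saliency mask
xm, ym = decoupled_mix(x1, y1, x2, y2, mask1)
print(xm.shape, ym)
```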
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
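Working directly on an INR through differential operators can be illustrated by differentiating an MLP-based representation with automatic differentiation. The sketch below computes a spatial gradient and a Laplacian of a toy INR; the network and its sine activation are placeholders, not the INSP-Net architecture.

```python
# Minimal sketch: spatial gradient and Laplacian of an MLP-based implicit
# representation via autograd. A placeholder INR, not the INSP-Net architecture.
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    """f(x, y) -> intensity; sine activations are a common INR choice (an assumption here)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.l1 = nn.Linear(2, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 1)
    def forward(self, coords):
        h = torch.sin(30.0 * self.l1(coords))   # SIREN-style first layer
        h = torch.sin(self.l2(h))
        return self.l3(h)

inr = TinyINR()
coords = torch.rand(128, 2, requires_grad=True)    # query points in [0, 1]^2
values = inr(coords)

# First-order operator: gradient of the signal w.r.t. the input coordinates.
grad = torch.autograd.grad(values.sum(), coords, create_graph=True)[0]   # (128, 2)

# Second-order operator: Laplacian = sum of second derivatives along each axis.
laplacian = sum(
    torch.autograd.grad(grad[:, i].sum(), coords, create_graph=True)[0][:, i]
    for i in range(2)
)
print(grad.shape, laplacian.shape)
```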
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising by automatically mining accurate structural information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) built from three stages: a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
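As a toy sketch of the "transform, enhance the sub-bands, reconstruct" idea behind a wavelet-based stage (not the MWDCNN model itself, whose blocks and widths differ), the code below applies an orthonormal Haar DWT, refines the four sub-bands with a small residual CNN, and inverts the transform exactly.

```python
# Toy sketch of a single wavelet-domain enhancement stage (not the MWDCNN model):
# orthonormal Haar DWT -> small CNN on the four sub-bands -> exact inverse DWT.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Orthonormal 2x2 Haar analysis filters (LL, LH, HL, HH); each has unit norm,
# so conv_transpose2d with the same weights is the exact inverse transform.
_HAAR = 0.5 * torch.tensor([
    [[ 1.,  1.], [ 1.,  1.]],   # LL
    [[ 1.,  1.], [-1., -1.]],   # LH
    [[ 1., -1.], [ 1., -1.]],   # HL
    [[ 1., -1.], [-1.,  1.]],   # HH
]).unsqueeze(1)                 # shape (4, 1, 2, 2)

def dwt(x):        # x: (B, 1, H, W) with even H, W
    return F.conv2d(x, _HAAR, stride=2)            # (B, 4, H/2, W/2)

def idwt(bands):   # exact inverse for the orthonormal Haar filters above
    return F.conv_transpose2d(bands, _HAAR, stride=2)

class WaveletEnhanceStage(nn.Module):
    """Refine the sub-bands with a small residual CNN, then reconstruct."""
    def __init__(self, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 4, 3, padding=1),
        )
    def forward(self, x):
        bands = dwt(x)
        bands = bands + self.body(bands)           # residual refinement of sub-bands
        return idwt(bands)

x = torch.rand(1, 1, 64, 64)
print(WaveletEnhanceStage()(x).shape)              # torch.Size([1, 1, 64, 64])
```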
arXiv Detail & Related papers (2022-09-26T03:28:23Z)
- Limited Parameter Denoising for Low-dose X-ray Computed Tomography Using Deep Reinforcement Learning [7.909848251752742]
We introduce a novel CT denoising framework that has interpretable behaviour and provides useful results with limited data.
Our experiments were carried out on abdominal scans from the Mayo Clinic TCIA dataset and the AAPM Low Dose CT Grand Challenge.
Our denoising framework increases the PSNR from 28.53 to 28.93 and the SSIM from 0.8952 to 0.9204.
arXiv Detail & Related papers (2022-03-28T14:30:43Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
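The contrastive pairwise objective can be illustrated with a generic NT-Xent-style loss over two embedded views of each image. This is only the general flavour, not CONTRIQUE's exact formulation (the paper defines positives by distortion characteristics rather than by simple augmentation pairs); the temperature and batch handling below are assumptions.

```python
# Generic NT-Xent style contrastive loss over two "views" of each image
# (a stand-in for CONTRIQUE's pairwise objective, not its exact formulation).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D), unit norm
    sim = z @ z.t() / temperature                             # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float('-inf'))                     # ignore self-similarity
    # The positive for sample i is its other view: index i+n (mod 2n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```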
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Convolutional versus Self-Organized Operational Neural Networks for Real-World Blind Image Denoising [25.31981236136533]
We tackle the real-world blind image denoising problem by employing, for the first time, a deep Self-ONN.
Deep Self-ONNs consistently achieve superior results with performance gains of up to 1.76 dB in PSNR.
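A Self-ONN layer can be read as a Maclaurin-series generalization of convolution, in which each "generative neuron" learns one kernel per power of a bounded input. The sketch below is a minimal version of that idea; the series order, widths, and tanh bounding are illustrative assumptions rather than the configuration used in the paper.

```python
# Sketch of a Self-ONN style "generative neuron" layer: a Maclaurin-series
# generalization of convolution, y = sum_{q=1..Q} conv(x^q, W_q) + b.
# Hyper-parameters (Q, widths) are illustrative assumptions.
import torch
import torch.nn as nn

class SelfONNLayer(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=3, q_order=3):
        super().__init__()
        self.q_order = q_order
        # One convolution per power term; bias only on the first term.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2, bias=(q == 1))
            for q in range(1, q_order + 1)
        )
    def forward(self, x):
        x = torch.tanh(x)                        # keep the powers of x bounded
        return sum(conv(x ** q) for q, conv in enumerate(self.convs, start=1))

layer = SelfONNLayer(1, 8, q_order=3)
print(layer(torch.rand(2, 1, 32, 32)).shape)     # torch.Size([2, 8, 32, 32])
```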
arXiv Detail & Related papers (2021-03-04T14:49:17Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
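A common way to feed multi-channel array data to a CNN for DoA estimation, and the assumption made in the sketch below rather than necessarily the paper's exact input format, is to stack the real and imaginary parts of the sample covariance matrix and predict scores over a discretized angle grid.

```python
# Sketch: a small CNN that maps an M x M sample covariance matrix (real and
# imaginary parts as two channels) to scores over a discretized DoA grid.
# Input formatting and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

def sample_covariance(snapshots):
    """snapshots: complex tensor (M sensors, T snapshots) -> (2, M, M) real/imag channels."""
    r = snapshots @ snapshots.conj().t() / snapshots.shape[1]
    return torch.stack([r.real, r.imag])

class DoACNN(nn.Module):
    def __init__(self, n_sensors=8, n_angles=181):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Linear(32 * n_sensors * n_sensors, n_angles)
    def forward(self, cov):                        # cov: (B, 2, M, M)
        return self.head(self.features(cov).flatten(1))   # logits over the angle grid

snapshots = torch.randn(8, 200, dtype=torch.complex64)
cov = sample_covariance(snapshots).unsqueeze(0)    # (1, 2, 8, 8)
print(DoACNN()(cov).shape)                         # torch.Size([1, 181])
```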
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- EDCNN: Edge enhancement-based Densely Connected Network with Compound Loss for Low-Dose CT Denoising [27.86840312836051]
We propose the Edge enhancement based Densely connected Convolutional Neural Network (EDCNN).
We construct a model with dense connections to fuse the extracted edge information and realize end-to-end image denoising.
Compared with the existing low-dose CT image denoising algorithms, our proposed model has a better performance in preserving details and suppressing noise.
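A hedged sketch of the two ingredients named in the title, edge enhancement and a compound loss, is given below: a Sobel-style edge-extraction layer with a learnable scale whose output is concatenated with the input, and a loss mixing pixel-wise MSE with a stand-in feature-space term. The dense connections and the exact loss weights of EDCNN are not reproduced here.

```python
# Hedged sketch (not the exact EDCNN): a Sobel-style edge-extraction layer with
# a learnable scale, plus a compound loss combining pixel-wise MSE with a
# stand-in feature-space (perceptual) term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableSobel(nn.Module):
    """Fixed Sobel kernels scaled by a single learnable factor (an assumption here)."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer('kernels', torch.stack([gx, gx.t()]).unsqueeze(1))  # (2, 1, 3, 3)
        self.scale = nn.Parameter(torch.ones(1))
    def forward(self, x):                          # x: (B, 1, H, W)
        edges = F.conv2d(x, self.scale * self.kernels, padding=1)
        return torch.cat([x, edges], dim=1)        # (B, 3, H, W): image + edge maps

def compound_loss(pred, target, feat_fn, w_perceptual=0.1):
    """MSE plus a feature-space term; feat_fn is a placeholder feature extractor."""
    return F.mse_loss(pred, target) + w_perceptual * F.mse_loss(feat_fn(pred), feat_fn(target))

edge = TrainableSobel()
x = torch.rand(2, 1, 64, 64)
print(edge(x).shape)                               # torch.Size([2, 3, 64, 64])
feat_fn = nn.Conv2d(1, 8, 3, padding=1)            # stand-in for a pretrained feature network
print(compound_loss(x, torch.rand_like(x), feat_fn).item())
```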
arXiv Detail & Related papers (2020-10-30T23:12:09Z)
- DCT-SNN: Using DCT to Distribute Spatial Information over Time for Learning Low-Latency Spiking Neural Networks [7.876001630578417]
Spiking Neural Networks (SNNs) offer a promising alternative to traditional deep learning frameworks.
SNNs suffer from high inference latency, which is a major bottleneck to their deployment.
We propose a scalable time-based encoding scheme that utilizes the Discrete Cosine Transform (DCT) to reduce the number of timesteps required for inference.
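One way to picture a DCT-based temporal encoding, and only a rough illustration of the general idea rather than the paper's scheme, is to split an image's 2D DCT coefficients into frequency bands and present the inverse DCT of one band per timestep, so that low-frequency content arrives first.

```python
# Rough illustration (assumptions, not the DCT-SNN encoding itself): split an
# image's 2D DCT coefficients into T frequency bands and present the inverse
# DCT of each band as the input at one timestep.
import numpy as np
from scipy.fft import dctn, idctn

def dct_temporal_encoding(image, timesteps=8):
    """image: (H, W) float array -> (T, H, W) per-timestep inputs."""
    h, w = image.shape
    coeffs = dctn(image, norm='ortho')
    # Band index by "frequency radius" (i + j), split into T roughly equal bands.
    i, j = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    band = np.minimum((i + j) * timesteps // (h + w - 1), timesteps - 1)
    frames = np.zeros((timesteps, h, w))
    for t in range(timesteps):
        masked = np.where(band == t, coeffs, 0.0)
        frames[t] = idctn(masked, norm='ortho')    # spatial contribution of band t
    return frames

img = np.random.rand(32, 32)
frames = dct_temporal_encoding(img)
print(frames.shape, np.allclose(frames.sum(axis=0), img))   # (8, 32, 32) True
```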
arXiv Detail & Related papers (2020-10-05T05:55:34Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
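The gradient-norm idea can be sketched as follows: compute the loss at the model's own predicted label and collect per-layer gradient norms as a feature vector for a lightweight detector. The layer selection, loss choice, and downstream classifier used by GraN are not reproduced here; this is only the general flavour.

```python
# Sketch of a gradient-norm score in the spirit of GraN (not the exact method):
# per-layer norms of the gradient of the loss at the model's own predicted label.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_norm_features(model, x):
    """Return one gradient norm per parameter tensor for a single input x of shape (1, C, H, W)."""
    model.zero_grad()
    logits = model(x)
    pred = logits.argmax(dim=1)                    # use the predicted label as the target
    loss = F.cross_entropy(logits, pred)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.tensor([g.norm().item() for g in grads])   # feature vector for a detector

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.rand(1, 1, 28, 28)
scores = gradient_norm_features(model, x)
print(scores)   # adversarial or misclassified inputs tend to yield atypical norm patterns
```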
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality or completeness of the information and is not responsible for any consequences arising from its use.