U-DuDoNet: Unpaired dual-domain network for CT metal artifact reduction
- URL: http://arxiv.org/abs/2103.04552v1
- Date: Mon, 8 Mar 2021 05:19:15 GMT
- Title: U-DuDoNet: Unpaired dual-domain network for CT metal artifact reduction
- Authors: Yuanyuan Lyu, Jiajun Fu, Cheng Peng, S. Kevin Zhou
- Abstract summary: We propose an unpaired dual-domain network (U-DuDoNet) trained using unpaired data.
Unlike the artifact disentanglement network (ADN), our U-DuDoNet directly models the artifact generation process through additions in both sinogram and image domains.
Our design includes a self-learned sinogram prior net, which provides guidance for restoring the information in the sinogram domain.
- Score: 12.158957925558296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, both supervised and unsupervised deep learning methods have been
widely applied to the CT metal artifact reduction (MAR) task. Supervised
methods such as the Dual Domain Network (DuDoNet) work well on simulation data;
however, their performance on clinical data is limited due to the domain gap.
Unsupervised methods generalize better, but cannot eliminate artifacts
completely through processing in the image domain alone. To combine the
advantages of both MAR methods, we propose an unpaired dual-domain network
(U-DuDoNet) trained using unpaired data. Unlike the artifact disentanglement
network (ADN) that utilizes multiple encoders and decoders for disentangling
content from artifact, our U-DuDoNet directly models the artifact generation
process through additions in both sinogram and image domains, which is
theoretically justified by an additive property associated with metal artifact.
Our design includes a self-learned sinogram prior net, which provides guidance
for restoring the information in the sinogram domain, and cyclic constraints
for artifact reduction and addition on unpaired data. Extensive experiments on
simulation data and clinical images demonstrate that our novel framework
outperforms the state-of-the-art unpaired approaches.
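The additive property underlying the design can be sketched numerically: an artifact-affected image is modeled as a clean image plus artifact components added in the sinogram and image domains, and the cyclic constraint requires that removing and then re-adding the estimated components reproduces the input. A minimal illustration under stated assumptions (the identity "projection" stubs and all variable names are hypothetical; the actual network uses a real Radon transform and learned artifact estimators):

```python
import numpy as np

def radon_stub(img):
    # Stand-in for the forward projection (Radon transform);
    # the real model operates on true sinograms.
    return img.copy()

def iradon_stub(sino):
    # Stand-in for filtered back-projection (inverse of the stub above).
    return sino.copy()

rng = np.random.default_rng(0)
clean = rng.random((8, 8))          # artifact-free image x
a_sino = 0.1 * rng.random((8, 8))   # artifact added in the sinogram domain
a_img = 0.05 * rng.random((8, 8))   # artifact added in the image domain

# Additive artifact model: y = FBP(P(x) + a_sino) + a_img
artifact_img = iradon_stub(radon_stub(clean) + a_sino) + a_img

# Artifact reduction subtracts the (here, known) additive components ...
restored = iradon_stub(radon_stub(artifact_img - a_img) - a_sino)

# ... and a cyclic constraint demands that re-adding them recovers the input.
recycled = iradon_stub(radon_stub(restored) + a_sino) + a_img

assert np.allclose(restored, clean)
assert np.allclose(recycled, artifact_img)
```

With identity stubs the cycle is exact; in the paper the same consistency is enforced as a training loss on unpaired data, where the artifact components must be learned rather than given.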
Related papers
- Semantic-guided Adversarial Diffusion Model for Self-supervised Shadow Removal [5.083330121710086]
GAN-based training often faces issues such as mode collapse and unstable optimization.
We propose a semantic-guided adversarial diffusion framework for self-supervised shadow removal.
We conduct experiments on multiple public datasets, and the experimental results demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-07-01T09:14:38Z)
- Unsupervised CT Metal Artifact Reduction by Plugging Diffusion Priors in Dual Domains [8.40564813751161]
Metallic implants often cause disruptive artifacts in computed tomography (CT) images, impeding accurate diagnosis.
Several supervised deep learning-based approaches have been proposed for metal artifact reduction (MAR).
We propose an unsupervised MAR method based on the diffusion model, a generative model with a high capacity to represent data distributions.
arXiv Detail & Related papers (2023-08-31T14:00:47Z)
- FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view CT Reconstruction [34.91517935951518]
Sparse-view computed tomography (CT) is a promising solution for expediting the scanning process and mitigating radiation exposure to patients.
Recently, deep learning-based image post-processing methods have shown promising results.
We propose a simple yet effective FREquency-band-awarE and SElf-guidED network, termed FreeSeed, which can effectively remove artifacts and recover missing details.
arXiv Detail & Related papers (2023-07-12T03:39:54Z)
- TriDoNet: A Triple Domain Model-driven Network for CT Metal Artifact Reduction [7.959841510571622]
We propose a novel triple-domain model-driven CT MAR network, termed TriDoNet.
We encode non-local repetitive streaking patterns of metal artifacts as an explicit tight frame sparse representation model with adaptive thresholds.
Experimental results show that our TriDoNet can generate superior artifact-reduced CT images.
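The sparse-representation idea in this line of work can be illustrated with the soft-thresholding step at its core: artifact patterns are assumed sparse under some transform, so coefficients below a threshold are zeroed and the rest shrunk. A minimal sketch (the transform here is omitted, i.e. the identity; TriDoNet learns a tight frame and adapts the thresholds per input):

```python
import numpy as np

def soft_threshold(coeffs, tau):
    # Proximal operator of the L1 norm: shrink each coefficient
    # toward zero by tau, zeroing anything with magnitude below tau.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

coeffs = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
sparse = soft_threshold(coeffs, tau=0.5)
# Small coefficients (|c| < 0.5) are zeroed; large ones shrink by 0.5.
print(sparse)
```

The adaptive thresholds in the paper play the role of `tau` here, chosen per coefficient band rather than fixed globally.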
arXiv Detail & Related papers (2022-11-14T08:28:57Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Learning MRI Artifact Removal With Unpaired Data [74.48301038665929]
Retrospective artifact correction (RAC) improves image quality post acquisition and enhances image usability.
Recent machine learning driven techniques for RAC are predominantly based on supervised learning.
Here we show that unwanted image artifacts can be disentangled and removed from an image via an RAC neural network learned with unpaired data.
arXiv Detail & Related papers (2021-10-09T16:09:27Z)
- DAN-Net: Dual-Domain Adaptive-Scaling Non-local Network for CT Metal Artifact Reduction [15.225899631788973]
Metal implants can heavily attenuate X-rays in computed tomography (CT) scans, leading to severe artifacts in reconstructed images.
Several network models have been proposed for metal artifact reduction (MAR) in CT.
We present a novel Dual-domain Adaptive-scaling Non-local network (DAN-Net) for MAR.
arXiv Detail & Related papers (2021-02-16T08:09:16Z)
- Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation [130.30465659190773]
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede deployment on mobile devices.
We introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation.
Experiments show DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and that of Pix2Pix by 4x while retaining a comparable performance against the full model.
arXiv Detail & Related papers (2020-11-17T02:39:19Z)
- Background Adaptive Faster R-CNN for Semi-Supervised Convolutional Object Detection of Threats in X-Ray Images [64.39996451133268]
We present a semi-supervised approach for threat recognition which we call Background Adaptive Faster R-CNN.
This approach is a training method for two-stage object detectors which uses Domain Adaptation methods from the field of deep learning.
Two domain discriminators, one for discriminating object proposals and one for image features, are adversarially trained to prevent encoding domain-specific information.
This can reduce threat detection false alarm rates by matching the statistics of extracted features from hand-collected backgrounds to real world data.
arXiv Detail & Related papers (2020-10-02T21:05:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.