Learning MRI Artifact Removal With Unpaired Data
- URL: http://arxiv.org/abs/2110.04604v1
- Date: Sat, 9 Oct 2021 16:09:27 GMT
- Title: Learning MRI Artifact Removal With Unpaired Data
- Authors: Siyuan Liu, Kim-Han Thung, Liangqiong Qu, Weili Lin, Dinggang Shen, and Pew-Thian Yap
- Abstract summary: Retrospective artifact correction (RAC) improves image quality post acquisition and enhances image usability.
Recent machine-learning-driven techniques for RAC are predominantly based on supervised learning.
Here we show that unwanted image artifacts can be disentangled and removed from an image via an RAC neural network learned with unpaired data.
- Score: 74.48301038665929
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retrospective artifact correction (RAC) improves image quality post
acquisition and enhances image usability. Recent machine-learning-driven
techniques for RAC are predominantly based on supervised learning, so their
practical utility can be limited: paired artifact-free and artifact-corrupted
images are typically insufficient or even non-existent. Here we show that
unwanted image artifacts can be disentangled and removed from an image via an
RAC neural network learned with unpaired data. This implies that our method
does not require matching artifact-corrupted data to be either collected via
acquisition or generated via simulation. Experimental results demonstrate that
our method is remarkably effective in removing artifacts and retaining
anatomical details in images with different contrasts.
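The core idea — learning artifact removal without paired examples — is usually enforced with a cycle-consistency objective. A minimal numpy sketch of that objective, with `remove_fn` and `add_fn` as hypothetical toy stand-ins for the learned removal and artifact-synthesis networks (not the paper's actual architecture):

```python
import numpy as np

def cycle_consistency_loss(x_corrupted, remove_fn, add_fn):
    """L1 cycle loss: removing artifacts and then re-synthesizing them
    should reconstruct the input, so no paired clean image is needed."""
    x_clean = remove_fn(x_corrupted)   # corrupted -> (estimated) clean
    x_recon = add_fn(x_clean)          # clean -> corrupted again
    return float(np.abs(x_corrupted - x_recon).mean())

# Toy stand-ins: an additive bias "artifact" and its exact inverse.
bias = 0.2
remove_fn = lambda img: img - bias
add_fn = lambda img: img + bias

x = np.random.rand(8, 8)
loss = cycle_consistency_loss(x, remove_fn, add_fn)  # ~0 up to float rounding
```

In the actual unpaired setting this loss is combined with adversarial terms that push the removal network's outputs toward the distribution of artifact-free images.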
Related papers
- Zero-Shot Artifact2Artifact: Self-incentive artifact removal for photoacoustic imaging without any data [18.154498077143195]
ZS-A2A is a zero-shot self-supervised artifact removal method based on a super-lightweight network.
ZS-A2A achieves state-of-the-art (SOTA) performance compared to existing zero-shot methods.
For in vivo rat liver data, ZS-A2A improves CNR from 17.48 to 43.46 in just 8 seconds.
arXiv Detail & Related papers (2024-12-19T14:11:49Z)
- Motion Artifact Removal in Pixel-Frequency Domain via Alternate Masks and Diffusion Model [58.694932010573346]
Motion artifacts present in magnetic resonance imaging (MRI) can seriously interfere with clinical diagnosis.
We propose a novel unsupervised purification method which leverages pixel-frequency information of noisy MRI images to guide a pre-trained diffusion model to recover clean MRI images.
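Pixel-frequency guidance of this kind can be pictured as merging trusted and generated frequency bands. A dependency-free numpy sketch — the circular low-pass mask and its radius are illustrative assumptions, not the paper's actual alternate-mask scheme:

```python
import numpy as np

def frequency_guided_merge(noisy, denoised, radius):
    """Keep low frequencies from `noisy` (trusted gross anatomy) and high
    frequencies from `denoised` (the generated image) via an FFT mask."""
    h, w = noisy.shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h // 2, w // 2
    low = np.hypot(yy - cy, xx - cx) <= radius   # centred low-pass mask

    F_noisy = np.fft.fftshift(np.fft.fft2(noisy))
    F_den = np.fft.fftshift(np.fft.fft2(denoised))
    merged = np.where(low, F_noisy, F_den)
    return np.fft.ifft2(np.fft.ifftshift(merged)).real
```

With a radius covering the whole spectrum the output is the noisy input; with radius below zero it is the denoised image, so the radius trades fidelity against artifact suppression.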
arXiv Detail & Related papers (2024-12-10T15:25:18Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation [53.95204595640208]
Data-Free Knowledge Distillation (DFKD) is an advanced technique that enables knowledge transfer from a teacher model to a student model without relying on original training data.
Previous approaches have generated synthetic images at high resolutions without leveraging information from real images.
MUSE generates images at lower resolutions while using Class Activation Maps (CAMs) to ensure that the generated images retain critical, class-specific features.
arXiv Detail & Related papers (2024-11-26T02:23:31Z)
- Realistic Restorer: artifact-free flow restorer (AF2R) for MRI motion artifact removal [3.8103327351507255]
Motion artifact severely degrades image quality, reduces examination efficiency, and makes accurate diagnosis difficult.
Previous methods often relied on implicit models for artifact correction, resulting in biases in modeling the artifact formation mechanism.
We incorporate the artifact generation mechanism to reestablish the relationship between artifacts and anatomical content in the image domain.
arXiv Detail & Related papers (2023-06-19T04:02:01Z)
- Patch-Based Denoising Diffusion Probabilistic Model for Sparse-View CT Reconstruction [6.907847093036819]
Sparse-view computed tomography (CT) can be used to reduce radiation dose greatly but suffers from severe image artifacts.
Deep-learning-based methods for sparse-view CT reconstruction have attracted major attention.
We propose a patch-based denoising diffusion probabilistic model (DDPM) for sparse-view CT reconstruction.
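A patch-based formulation needs a split-and-stitch step around the per-patch model. A numpy sketch of that stitching with overlap averaging — `patch_fn` is a placeholder for the per-patch denoising diffusion step, which is not implemented here:

```python
import numpy as np

def denoise_by_patches(img, patch, stride, patch_fn):
    """Apply patch_fn to overlapping patches and average the overlaps,
    the stitching a patch-based reconstruction model needs."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            out[i:i+patch, j:j+patch] += patch_fn(img[i:i+patch, j:j+patch])
            weight[i:i+patch, j:j+patch] += 1.0
    return out / np.maximum(weight, 1.0)
```

Averaging the overlaps suppresses seams between independently processed patches; the stride controls the overlap and hence the compute/quality trade-off.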
arXiv Detail & Related papers (2022-11-18T17:35:36Z)
- Artifact Reduction in Fundus Imaging using Cycle Consistent Adversarial Neural Networks [0.0]
Deep learning is a powerful tool to extract patterns from data without much human intervention.
We attempt to automatically rectify artifacts present in fundus images.
We use a CycleGAN based model which consists of residual blocks to reduce the artifacts in the images.
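A residual block adds a learned correction to its input, which is why it suits artifact reduction: the skip path carries the anatomy, the learned path the residual. A dependency-free sketch using a dense map in place of the convolutions a real CycleGAN block would use:

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    """y = x + f(x): the skip connection lets the block learn only the
    artifact residual while the input passes through unchanged."""
    return x + activation(x @ weight)

x = np.random.rand(3, 4)
```

With a zero weight matrix the block is the identity, so training starts from "change nothing" and learns deviations from there.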
arXiv Detail & Related papers (2021-12-25T18:05:48Z)
- "One-Shot" Reduction of Additive Artifacts in Medical Images [17.354879155345376]
We introduce One-Shot medical image Artifact Reduction (OSAR), which exploits the power of deep learning but without using pre-trained general networks.
Specifically, we train a light-weight image-specific artifact reduction network using data synthesized from the input image at test-time.
We show that the proposed method can reduce artifacts better than state-of-the-art both qualitatively and quantitatively using shorter test time.
arXiv Detail & Related papers (2021-10-23T18:35:00Z)
- Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
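Data consistency can be sketched as a projection in the measurement domain: wherever data were actually acquired, the network prediction is overwritten by the measurements. A numpy toy with the 2-D FFT standing in for the (hypothetical) imaging operator pair — not the paper's cone-beam CT geometry:

```python
import numpy as np

def data_consistent_update(x_net, measured, mask, forward, adjoint):
    """Replace the network prediction's measurements with acquired ones
    (mask == True) and map the result back to image space."""
    y_net = forward(x_net)                 # predicted measurements
    y_dc = np.where(mask, measured, y_net) # keep acquired data verbatim
    return adjoint(y_dc)

# Toy operator pair standing in for CT projection / back-projection.
forward = np.fft.fft2
adjoint = lambda y: np.fft.ifft2(y).real
```

With a fully sampled mask the update reproduces the ground truth regardless of the network output, which is exactly the guarantee data consistency provides against hallucinated lesions.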
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
- FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.