FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view
CT Reconstruction
- URL: http://arxiv.org/abs/2307.05890v1
- Date: Wed, 12 Jul 2023 03:39:54 GMT
- Title: FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view
CT Reconstruction
- Authors: Chenglong Ma, Zilong Li, Junping Zhang, Yi Zhang, Hongming Shan
- Abstract summary: Sparse-view computed tomography (CT) is a promising solution for expediting the scanning process and mitigating radiation exposure to patients.
Recently, deep learning-based image post-processing methods have shown promising results.
We propose a simple yet effective FREquency-band-awarE and SElf-guidED network, termed FreeSeed, which can effectively remove artifacts and recover missing details.
- Score: 34.91517935951518
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sparse-view computed tomography (CT) is a promising solution for expediting
the scanning process and mitigating radiation exposure to patients; the
reconstructed images, however, contain severe streak artifacts, compromising
subsequent screening and diagnosis. Recently, deep learning-based image
post-processing methods along with their dual-domain counterparts have shown
promising results. However, existing methods usually produce over-smoothed
images with loss of details due to (1) the difficulty in accurately modeling
the artifact patterns in the image domain, and (2) the equal treatment of each
pixel in the loss function. To address these issues, we concentrate on the
image post-processing and propose a simple yet effective FREquency-band-awarE
and SElf-guidED network, termed FreeSeed, which can effectively remove artifacts
and recover missing details from the contaminated sparse-view CT images.
Specifically, we first propose a frequency-band-aware artifact modeling network
(FreeNet), which learns artifact-related frequency-band attention in the Fourier
domain to better model the globally distributed streak artifacts in the
sparse-view CT images. We then introduce a self-guided artifact refinement
network (SeedNet), which leverages the predicted artifact to assist FreeNet in
continuing to refine the severely corrupted details. Extensive experiments
demonstrate the superior performance of FreeSeed and its dual-domain
counterpart over the state-of-the-art sparse-view CT reconstruction methods.
Source code is made available at https://github.com/Masaaki-75/freeseed.
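The frequency-band attention described above can be illustrated with a minimal sketch: transform the image to the Fourier domain, reweight concentric radial frequency bands, and transform back. The fixed `band_weights` below are a hypothetical stand-in for the attention that FreeNet is described as learning; this is not the authors' implementation.

```python
import numpy as np

def frequency_band_attention(image, band_weights):
    """Reweight concentric radial frequency bands of an image.

    `band_weights[k]` scales the k-th band, ordered from low to high
    frequency. A non-learned illustration of band-wise attention in
    the Fourier domain.
    """
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Normalized radial frequency of each Fourier coefficient, in [0, 1].
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    radius /= radius.max()

    # Assign each coefficient to one of len(band_weights) bands.
    n_bands = len(band_weights)
    band_idx = np.minimum((radius * n_bands).astype(int), n_bands - 1)
    attention = np.asarray(band_weights)[band_idx]

    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * attention))
    return filtered.real

# Example: keep low frequencies, attenuate higher bands where streak
# artifacts tend to concentrate (weights chosen for illustration only).
img = np.random.rand(64, 64)
out = frequency_band_attention(img, band_weights=[1.0, 0.8, 0.5, 0.2])
```

With all weights equal to 1 the operation is an identity up to numerical precision, which makes the band assignment easy to sanity-check.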
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then used in the second stage to tune the diffusion model by assigning a per-pixel confidence map to each image.
arXiv Detail & Related papers (2025-01-21T18:56:41Z)
- Motion Artifact Removal in Pixel-Frequency Domain via Alternate Masks and Diffusion Model [58.694932010573346]
Motion artifacts present in magnetic resonance imaging (MRI) can seriously interfere with clinical diagnosis.
We propose a novel unsupervised purification method which leverages pixel-frequency information of noisy MRI images to guide a pre-trained diffusion model to recover clean MRI images.
arXiv Detail & Related papers (2024-12-10T15:25:18Z)
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- TD-Net: A Tri-domain network for sparse-view CT reconstruction [16.40734977207315]
TD-Net is a pioneering tri-domain approach that unifies sinogram, image, and frequency domain optimizations.
It adeptly preserves intricate details, overcoming the prevalent over-smoothing issue.
The enhanced capabilities of TD-Net in varied noise scenarios highlight its potential as a breakthrough in medical imaging.
arXiv Detail & Related papers (2023-11-26T17:48:53Z)
- Unpaired Optical Coherence Tomography Angiography Image Super-Resolution via Frequency-Aware Inverse-Consistency GAN [6.717440708401628]
We propose a Generative Adversarial Network (GAN)-based unpaired super-resolution method for OCTA images.
To facilitate a precise spectrum of the reconstructed image, we also propose a frequency-aware adversarial loss for the discriminator.
Experiments show that our method outperforms other state-of-the-art unpaired methods both quantitatively and visually.
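A frequency-aware penalty of the kind described above can be sketched as an L1 distance between log-amplitude spectra. This is an illustration of the idea only: the paper's actual loss is adversarial and applied through the discriminator, and the function name and log-amplitude formulation here are assumptions.

```python
import numpy as np

def spectral_l1(pred, target):
    """L1 distance between log-amplitude Fourier spectra of two images.

    A hypothetical frequency-domain penalty: identical images incur
    zero cost, while an image missing high-frequency content (e.g. a
    blurred reconstruction) is penalized for its spectral mismatch.
    """
    amp_pred = np.log1p(np.abs(np.fft.fft2(pred)))
    amp_target = np.log1p(np.abs(np.fft.fft2(target)))
    return float(np.mean(np.abs(amp_pred - amp_target)))

# Usage: compare a reconstruction against its reference image.
x = np.random.rand(32, 32)
penalty = spectral_l1(x, x)  # identical inputs give zero penalty
```

Working on log-amplitudes rather than raw magnitudes keeps the penalty from being dominated by the large DC and low-frequency coefficients.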
arXiv Detail & Related papers (2023-09-29T14:19:51Z)
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- U-DuDoNet: Unpaired dual-domain network for CT metal artifact reduction [12.158957925558296]
We propose an unpaired dual-domain network (U-DuDoNet) trained using unpaired data.
Unlike the artifact disentanglement network (ADN), our U-DuDoNet directly models the artifact generation process through additions in both sinogram and image domains.
Our design includes a self-learned sinogram prior net, which provides guidance for restoring the information in the sinogram domain.
arXiv Detail & Related papers (2021-03-08T05:19:15Z)
- DAN-Net: Dual-Domain Adaptive-Scaling Non-local Network for CT Metal Artifact Reduction [15.225899631788973]
Metal implants can heavily attenuate X-rays in computed tomography (CT) scans, leading to severe artifacts in reconstructed images.
Several network models have been proposed for metal artifact reduction (MAR) in CT.
We present a novel Dual-domain Adaptive-scaling Non-local network (DAN-Net) for MAR.
arXiv Detail & Related papers (2021-02-16T08:09:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.