MEPNet: A Model-Driven Equivariant Proximal Network for Joint
Sparse-View Reconstruction and Metal Artifact Reduction in CT Images
- URL: http://arxiv.org/abs/2306.14274v1
- Date: Sun, 25 Jun 2023 15:50:11 GMT
- Title: MEPNet: A Model-Driven Equivariant Proximal Network for Joint
Sparse-View Reconstruction and Metal Artifact Reduction in CT Images
- Authors: Hong Wang, Minghao Zhou, Dong Wei, Yuexiang Li, Yefeng Zheng
- Abstract summary: We propose a model-driven equivariant proximal network, called MEPNet.
MEPNet is optimization-inspired and has a clear working mechanism.
We will release the code at https://github.com/hongwang01/MEPNet.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse-view computed tomography (CT) has been adopted as an important
technique for speeding up data acquisition and decreasing radiation dose.
However, due to the lack of sufficient projection data, the reconstructed CT
images often present severe artifacts, which will be further amplified when
patients carry metallic implants. For this joint sparse-view reconstruction and
metal artifact reduction task, most of the existing methods are generally
confronted with two main limitations: 1) They are mostly built on common
network modules without fully embedding the physical imaging geometry
constraint of this specific task into the dual-domain learning; 2) Some
important prior knowledge is not deeply explored and sufficiently utilized.
To address these issues, we specifically construct a dual-domain reconstruction
model and propose a model-driven equivariant proximal network, called MEPNet.
The main characteristics of MEPNet are: 1) It is optimization-inspired and has
a clear working mechanism; 2) The involved proximal operator is modeled via a
rotation equivariant convolutional neural network, which finely encodes the
inherent rotational prior underlying CT scanning, namely that the same organ can
be imaged at different angles. Extensive experiments conducted on several datasets
comprehensively substantiate that compared with the conventional
convolution-based proximal network, such a rotation equivariance mechanism
enables our proposed method to achieve better reconstruction performance with
fewer network parameters. We will release the code at
https://github.com/hongwang01/MEPNet.
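The rotational prior can be made concrete with a toy example. The sketch below is an illustration only, not the paper's group-equivariant CNN (which handles finer rotation angles than 90 degrees): it shows that a filter averaged over its four 90-degree rotations yields a convolution that commutes with rotating the input, i.e. conv(rot(x)) == rot(conv(x)).

```python
# Toy demonstration of rotation equivariance for the 90-degree rotation
# group C4, using plain Python lists. A filter made invariant under C4
# produces a convolution that commutes with input rotation.

def rot90(grid):
    """Rotate a square 2D list 90 degrees counter-clockwise."""
    n = len(grid)
    return [[grid[c][n - 1 - r] for c in range(n)] for r in range(n)]

def conv2d(x, k):
    """'Valid'-mode 2D cross-correlation of square input x with square kernel k."""
    n, m = len(x), len(k)
    out = n - m + 1
    return [[sum(x[i + a][j + b] * k[a][b] for a in range(m) for b in range(m))
             for j in range(out)] for i in range(out)]

def symmetrize(k):
    """Average a kernel over its four 90-degree rotations, making it C4-invariant."""
    ks = [k]
    for _ in range(3):
        ks.append(rot90(ks[-1]))
    m = len(k)
    return [[sum(kk[a][b] for kk in ks) / 4 for b in range(m)] for a in range(m)]

if __name__ == "__main__":
    x = [[1, 2, 3, 4, 5], [5, 4, 3, 2, 1], [0, 1, 0, 1, 0],
         [2, 2, 2, 2, 2], [1, 0, 1, 0, 1]]
    k = symmetrize([[1, 2, 0], [0, 1, 3], [4, 0, 1]])
    # Equivariance check: convolving the rotated image equals rotating the result.
    print(conv2d(rot90(x), k) == rot90(conv2d(x, k)))
```

In a learned equivariant network the same idea applies to the trainable filters, which is why the symmetry prior can be enforced with fewer free parameters than an unconstrained convolution.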
Related papers
- MVMS-RCN: A Dual-Domain Unfolding CT Reconstruction with Multi-sparse-view and Multi-scale Refinement-correction [9.54126979075279]
Sparse-view CT imaging reduces the number of projection views to lower the radiation dose.
Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods do not fully use the projection data.
This paper aims to use mathematical ideas and design optimal DL imaging algorithms for sparse-view tomography reconstructions.
arXiv Detail & Related papers (2024-05-27T13:01:25Z) - Rotation Equivariant Proximal Operator for Deep Unfolding Methods in Image Restoration [62.41329042683779]
We propose a high-accuracy rotation equivariant proximal network that effectively embeds rotation symmetry priors into the deep unfolding framework.
arXiv Detail & Related papers (2023-12-25T11:53:06Z) - SeUNet-Trans: A Simple yet Effective UNet-Transformer Model for Medical
Image Segmentation [0.0]
We propose a simple yet effective UNet-Transformer (seUNet-Trans) model for medical image segmentation.
In our approach, the UNet model is designed as a feature extractor to generate multiple feature maps from the input images.
By leveraging the UNet architecture and the self-attention mechanism, our model not only preserves both local and global context information but also captures long-range dependencies between input elements.
arXiv Detail & Related papers (2023-10-16T01:13:38Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - Orientation-Shared Convolution Representation for CT Metal Artifact
Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have gained promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI
Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
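The core idea of deep unfolding can be sketched with a generic example. The snippet below unrolls ISTA iterations, x_{k+1} = prox(x_k - η·Aᵀ(A·x_k - y)), into a fixed-depth pipeline for a toy sparse-recovery problem; in an unfolding network such as MGDUN the step size, threshold, or whole proximal step would be learned modules, whereas here they are hand-set illustrative values, not the paper's architecture.

```python
# Minimal "deep unfolding" sketch: K unrolled ISTA iterations, where each
# loop body corresponds to one network layer. The soft-threshold is the
# proximal operator of the L1 penalty; an unfolding network would replace
# it with a learned module.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (element-wise soft shrinkage)."""
    return [max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0) for v in x]

def unfolded_ista(A, y, num_layers=200, step=0.1, lam=0.05):
    """Run num_layers unrolled iterations of x <- prox(x - step * A^T(Ax - y))."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(num_layers):  # each loop body plays the role of one layer
        residual = [r - t for r, t in zip(matvec(A, x), y)]
        grad = matvec(At, residual)
        x = soft_threshold([v - step * g for v, g in zip(x, grad)], step * lam)
    return x

if __name__ == "__main__":
    # Recover a sparse vector from an overdetermined linear measurement.
    A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.5, 0.5, 0.0]]
    y = [2.0, 0.0, -1.0, 1.0]  # measurements of the sparse x_true = [2, 0, -1]
    print(unfolded_ista(A, y))  # close to [2, 0, -1], with small L1 shrinkage bias
```

The step size must satisfy the usual ISTA condition (smaller than 2 over the largest eigenvalue of AᵀA) for the unrolled iterations to converge.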
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Spatiotemporal Feature Learning Based on Two-Step LSTM and Transformer
for CT Scans [2.3682456328966115]
We propose a novel and effective two-step approach to thoroughly tackle COVID-19 symptom classification.
First, the semantic feature embedding of each slice for a CT scan is extracted by conventional backbone networks.
Then, we propose a long short-term memory (LSTM) and Transformer-based sub-network to handle temporal feature learning.
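The two-step structure can be illustrated with a deliberately simplified stand-in: step 1 summarizes each slice as a feature vector, and step 2 aggregates the slice sequence with a softmax-attention pooling. Both functions below are toy assumptions for illustration, not the paper's CNN backbone or LSTM/Transformer sub-network.

```python
import math

def slice_embedding(slice2d):
    """Step-1 stand-in: summarize one CT slice as a tiny feature vector
    (mean intensity, max intensity, nonzero fraction). A real model would
    use a CNN backbone here."""
    flat = [v for row in slice2d for v in row]
    return [sum(flat) / len(flat),
            max(flat),
            sum(1 for v in flat if v > 0) / len(flat)]

def attention_pool(embeddings, score_w):
    """Step-2 stand-in: softmax-attention pooling over the slice sequence,
    mimicking how a temporal module weighs informative slices."""
    scores = [sum(w * e for w, e in zip(score_w, emb)) for emb in embeddings]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(embeddings[0])
    return [sum(w * emb[d] for w, emb in zip(weights, embeddings))
            for d in range(dim)]

if __name__ == "__main__":
    scan = [[[0.0, 0.0], [0.0, 1.0]],   # three 2x2 "slices" of one scan
            [[1.0, 2.0], [3.0, 2.0]],
            [[0.0, 1.0], [0.0, 0.0]]]
    embs = [slice_embedding(s) for s in scan]
    print(attention_pool(embs, [1.0, 0.0, 0.0]))  # scan-level feature vector
```

The pooled output is a convex combination of the per-slice embeddings, so slices with higher attention scores dominate the scan-level representation.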
arXiv Detail & Related papers (2022-07-04T16:59:05Z) - Transformer-empowered Multi-scale Contextual Matching and Aggregation
for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is promising to yield SR images with higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - DAN-Net: Dual-Domain Adaptive-Scaling Non-local Network for CT Metal
Artifact Reduction [15.225899631788973]
Metal implants can heavily attenuate X-rays in computed tomography (CT) scans, leading to severe artifacts in reconstructed images.
Several network models have been proposed for metal artifact reduction (MAR) in CT.
We present a novel Dual-domain Adaptive-scaling Non-local network (DAN-Net) for MAR.
arXiv Detail & Related papers (2021-02-16T08:09:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.