AliasNet: Alias Artefact Suppression Network for Accelerated
Phase-Encode MRI
- URL: http://arxiv.org/abs/2302.08861v2
- Date: Tue, 10 Oct 2023 07:38:02 GMT
- Title: AliasNet: Alias Artefact Suppression Network for Accelerated
Phase-Encode MRI
- Authors: Marlon E. Bran Lorenzana, Shekhar S. Chandra and Feng Liu
- Abstract summary: Sparse reconstruction is an important aspect of MRI, helping to reduce acquisition time and improve spatial-temporal resolution.
Experiments conducted on retrospectively under-sampled brain and knee data demonstrate that combination of the proposed 1D AliasNet modules with existing 2D deep learned (DL) recovery techniques leads to an improvement in image quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse reconstruction is an important aspect of MRI, helping to reduce
acquisition time and improve spatial-temporal resolution. Popular methods are
based mostly on compressed sensing (CS), which relies on the random sampling of
k-space to produce incoherent (noise-like) artefacts. Due to hardware
constraints, 1D Cartesian phase-encode under-sampling schemes are popular for
2D CS-MRI. However, 1D under-sampling limits 2D incoherence between
measurements, yielding structured aliasing artefacts (ghosts) that may be
difficult to remove assuming a 2D sparsity model. Reconstruction algorithms
typically deploy direction-insensitive 2D regularisation for these
direction-associated artefacts. Recognising that phase-encode artefacts can be
separated into contiguous 1D signals, we develop two decoupling techniques that
enable explicit 1D regularisation and leverage the excellent 1D incoherence
characteristics. We also derive a combined 1D + 2D reconstruction technique
that takes advantage of spatial relationships within the image. Experiments
conducted on retrospectively under-sampled brain and knee data demonstrate that
combination of the proposed 1D AliasNet modules with existing 2D deep learned
(DL) recovery techniques leads to an improvement in image quality. We also find
AliasNet enables a superior scaling of performance compared to increasing the
size of the original 2D network layers. AliasNet therefore improves the
regularisation of aliasing artefacts arising from phase-encode under-sampling,
by tailoring the network architecture to account for their expected appearance.
The proposed 1D + 2D approach is compatible with any existing 2D DL recovery
technique deployed for this application.
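The decoupling idea in the abstract can be made concrete with a small numerical sketch. This is not the authors' code; it is a minimal numpy illustration (toy square phantom, hypothetical sampling density) of why 1D Cartesian phase-encode under-sampling separates into independent 1D problems: because whole phase-encode lines are kept or dropped, an inverse FFT along the fully sampled readout axis yields a hybrid space in which every readout position obeys its own 1D under-sampled model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D object standing in for a brain/knee slice.
N = 64
image = np.zeros((N, N))
image[16:48, 16:48] = 1.0

# Convention here: axis 0 = phase-encode, axis 1 = readout (fully sampled).
kspace = np.fft.fft2(image)

# 1D Cartesian under-sampling: keep whole phase-encode lines,
# densely at low frequencies, randomly elsewhere (density is illustrative).
mask_1d = np.zeros(N, dtype=bool)
mask_1d[:4] = True                    # low-frequency lines (unshifted order)
mask_1d[-4:] = True
mask_1d |= rng.random(N) < 0.15       # random higher-frequency lines
masked = kspace * mask_1d[:, None]    # mask is constant along the readout axis

# Zero-filled reconstruction: ghosts are coherent along phase-encode only.
zf = np.fft.ifft2(masked)

# Decoupling: inverse FFT along the fully sampled readout axis gives a
# hybrid (k_phase, x_readout) space in which each readout position j is an
# independent 1D problem:  hybrid[:, j] = mask_1d * F_1d(image[:, j]).
hybrid = np.fft.ifft(masked, axis=1)
j = 20
direct_1d = mask_1d * np.fft.fft(image[:, j])
print("per-column 1D model matches:", np.allclose(hybrid[:, j], direct_1d))
```

Under this separation, a 1D regulariser (such as the proposed AliasNet modules) can act on each hybrid-space signal independently before, or alongside, a 2D network operating on the full image.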
Related papers
- Coarse-Fine Spectral-Aware Deformable Convolution For Hyperspectral Image Reconstruction [15.537910100051866]
We study the inverse problem of Coded Aperture Snapshot Spectral Imaging (CASSI).
We propose the Coarse-Fine Spectral-Aware Deformable Convolution Network (CFSDCN).
Our CFSDCN significantly outperforms previous state-of-the-art (SOTA) methods on both simulated and real HSI datasets.
arXiv Detail & Related papers (2024-06-18T15:15:12Z) - Scalable Non-Cartesian Magnetic Resonance Imaging with R2D2 [6.728969294264806]
We propose a new approach for non-Cartesian magnetic resonance image reconstruction.
We leverage the "Residual-to-Residual DNN series for high-Dynamic range imaging (R2D2)"
arXiv Detail & Related papers (2024-03-26T17:45:06Z) - The R2D2 deep neural network series paradigm for fast precision imaging in radio astronomy [1.7249361224827533]
Recent image reconstruction techniques have remarkable capability for imaging precision, well beyond CLEAN's capability.
We introduce a novel deep learning approach, dubbed "Residual-to-Residual DNN series for high-Dynamic range imaging"
R2D2's capability to deliver high precision is demonstrated in simulation, across a variety of image observation settings using the Very Large Array (VLA)
arXiv Detail & Related papers (2024-03-08T16:57:54Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - JOTR: 3D Joint Contrastive Learning with Transformers for Occluded Human
Mesh Recovery [84.67823511418334]
This paper presents 3D JOint contrastive learning with TRansformers framework for handling occluded 3D human mesh recovery.
Our method includes an encoder-decoder transformer architecture to fuse 2D and 3D representations for achieving 2D & 3D aligned results.
arXiv Detail & Related papers (2023-07-31T02:58:58Z) - NeRF-GAN Distillation for Efficient 3D-Aware Generation with
Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z) - Joint-MAE: 2D-3D Joint Masked Autoencoders for 3D Point Cloud
Pre-training [65.75399500494343]
Masked Autoencoders (MAE) have shown promising performance in self-supervised learning for 2D and 3D computer vision.
We propose Joint-MAE, a 2D-3D joint MAE framework for self-supervised 3D point cloud pre-training.
arXiv Detail & Related papers (2023-02-27T17:56:18Z) - Deep-MDS Framework for Recovering the 3D Shape of 2D Landmarks from a
Single Image [8.368476827165114]
This paper proposes a framework to recover the 3D shape of 2D landmarks on a human face from a single input image.
A deep neural network learns the pairwise dissimilarities among the 2D landmarks, which are used by the NMDS approach.
arXiv Detail & Related papers (2022-10-27T06:20:10Z) - Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral
Compressive Imaging [142.11622043078867]
We propose a principled Degradation-Aware Unfolding Framework (DAUF) that estimates parameters from the compressed image and physical mask, and then uses these parameters to control each iteration.
By plugging HST into DAUF, we establish the first Transformer-based deep unfolding method, Degradation-Aware Unfolding Half-Shuffle Transformer (DAUHST) for HSI reconstruction.
arXiv Detail & Related papers (2022-05-20T11:37:44Z) - Multi-initialization Optimization Network for Accurate 3D Human Pose and
Shape Estimation [75.44912541912252]
We propose a three-stage framework named Multi-Initialization Optimization Network (MION)
In the first stage, we strategically select different coarse 3D reconstruction candidates which are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to respectively refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
arXiv Detail & Related papers (2021-12-24T02:43:58Z) - One-dimensional Deep Low-rank and Sparse Network for Accelerated MRI [19.942978606567547]
Deep learning has shown astonishing performance in accelerated magnetic resonance imaging (MRI)
Most state-of-the-art deep learning reconstructions adopt the powerful convolutional neural network and perform 2D convolution.
We present a new approach that explores the 1D convolution, making the deep network much easier to train and generalise.
arXiv Detail & Related papers (2021-12-09T06:39:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.