AliasNet: Alias Artefact Suppression Network for Accelerated
Phase-Encode MRI
- URL: http://arxiv.org/abs/2302.08861v2
- Date: Tue, 10 Oct 2023 07:38:02 GMT
- Title: AliasNet: Alias Artefact Suppression Network for Accelerated
Phase-Encode MRI
- Authors: Marlon E. Bran Lorenzana, Shekhar S. Chandra and Feng Liu
- Abstract summary: Sparse reconstruction is an important aspect of MRI, helping to reduce acquisition time and improve spatial-temporal resolution.
Experiments conducted on retrospectively under-sampled brain and knee data demonstrate that combining the proposed 1D AliasNet modules with existing 2D deep learned (DL) recovery techniques improves image quality.
- Score: 4.752084030395196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse reconstruction is an important aspect of MRI, helping to reduce
acquisition time and improve spatial-temporal resolution. Popular methods are
based mostly on compressed sensing (CS), which relies on the random sampling of
k-space to produce incoherent (noise-like) artefacts. Due to hardware
constraints, 1D Cartesian phase-encode under-sampling schemes are popular for
2D CS-MRI. However, 1D under-sampling limits 2D incoherence between
measurements, yielding structured aliasing artefacts (ghosts) that may be
difficult to remove assuming a 2D sparsity model. Reconstruction algorithms
typically deploy direction-insensitive 2D regularisation for these
direction-associated artefacts. Recognising that phase-encode artefacts can be
separated into contiguous 1D signals, we develop two decoupling techniques that
enable explicit 1D regularisation and leverage the excellent 1D incoherence
characteristics. We also derive a combined 1D + 2D reconstruction technique
that takes advantage of spatial relationships within the image. Experiments
conducted on retrospectively under-sampled brain and knee data demonstrate that
combining the proposed 1D AliasNet modules with existing 2D deep learned (DL)
recovery techniques improves image quality. We also find that AliasNet scales
performance more effectively than simply enlarging the original 2D network
layers. AliasNet therefore improves the
regularisation of aliasing artefacts arising from phase-encode under-sampling,
by tailoring the network architecture to account for their expected appearance.
The proposed 1D + 2D approach is compatible with any existing 2D DL recovery
technique deployed for this application.
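As a rough illustration of the sampling regime and the 1D decoupling described above, the NumPy sketch below (illustrative mask parameters and a random stand-in image; not the authors' implementation) retrospectively under-samples phase-encode lines of 2D k-space and then separates the data into contiguous 1D signals via an inverse FFT along the fully sampled frequency-encode axis, which is where a 1D prior such as an AliasNet module would act.

```python
# Minimal sketch (assumed, illustrative parameters; not the authors' code):
# retrospective 1D Cartesian phase-encode under-sampling and decoupling of
# the resulting data into contiguous 1D signals.
import numpy as np

def phase_encode_undersample(image, accel=4, centre_lines=16, seed=0):
    """Keep a fully sampled centre plus random outer phase-encode (ky) lines."""
    ny, nx = image.shape
    rng = np.random.default_rng(seed)
    keep = np.zeros(ny, dtype=bool)
    keep[ny // 2 - centre_lines // 2: ny // 2 + centre_lines // 2] = True
    keep |= rng.random(ny) < 1.0 / accel
    kspace = np.fft.fftshift(np.fft.fft2(image))   # rows index ky, columns index kx
    kspace[~keep, :] = 0                           # drop unmeasured phase-encode lines
    return kspace, keep

image = np.random.rand(256, 256)                   # stand-in for a brain/knee slice
kspace, mask = phase_encode_undersample(image)

# Zero-filled recovery: structured aliasing (ghosts) appears only along the
# phase-encode (vertical) axis, because sampling is fully dense along kx.
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Decoupling: an inverse FFT along the fully sampled frequency-encode axis
# turns the 2D problem into independent 1D problems, one per image column,
# each with incoherent 1D aliasing where a learned 1D prior (e.g. an
# AliasNet-style module) could be applied before transforming back.
hybrid = np.fft.ifft(np.fft.ifftshift(kspace, axes=1), axis=1)   # (ky, x) hybrid space
columns_1d = [hybrid[:, i] for i in range(hybrid.shape[1])]      # contiguous 1D signals
```

Per the abstract, the combined 1D + 2D approach would then alternate this column-wise 1D regularisation with an existing 2D DL recovery step in image space.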
Related papers
- GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency [50.11520458252128]
Existing 3D affordance learning methods struggle with generalization and robustness due to limited annotated data.
We propose GEAL, a novel framework designed to enhance the generalization and robustness of 3D affordance learning by leveraging large-scale pre-trained 2D models.
GEAL consistently outperforms existing methods across seen and novel object categories, as well as corrupted data.
arXiv Detail & Related papers (2024-12-12T17:59:03Z) - DSplats: 3D Generation by Denoising Splats-Based Multiview Diffusion Models [67.50989119438508]
We introduce DSplats, a novel method that directly denoises multiview images using Gaussian-based Reconstructors to produce realistic 3D assets.
Our experiments demonstrate that DSplats not only produces high-quality, spatially consistent outputs, but also sets a new standard in single-image to 3D reconstruction.
arXiv Detail & Related papers (2024-12-11T07:32:17Z) - Coarse-Fine Spectral-Aware Deformable Convolution For Hyperspectral Image Reconstruction [15.537910100051866]
We study the inverse problem of Coded Aperture Snapshot Spectral Imaging (CASSI)
We propose Coarse-Fine Spectral-Aware Deformable Convolution Network (CFSDCN)
Our CFSDCN significantly outperforms previous state-of-the-art (SOTA) methods on both simulated and real HSI datasets.
arXiv Detail & Related papers (2024-06-18T15:15:12Z) - Scalable Non-Cartesian Magnetic Resonance Imaging with R2D2 [6.728969294264806]
We propose a new approach for non-Cartesian magnetic resonance image reconstruction.
We leverage the "Residual to-Residual DNN series for high range imaging (R2D2)"
arXiv Detail & Related papers (2024-03-26T17:45:06Z) - The R2D2 deep neural network series paradigm for fast precision imaging in radio astronomy [1.7249361224827533]
Recent image reconstruction techniques deliver remarkable imaging precision, well beyond CLEAN's capability.
We introduce a novel deep learning approach, dubbed "Residual-to-Residual DNN series for high-Dynamic range imaging"
R2D2's capability to deliver high precision is demonstrated in simulation, across a variety of image observation settings using the Very Large Array (VLA)
arXiv Detail & Related papers (2024-03-08T16:57:54Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - JOTR: 3D Joint Contrastive Learning with Transformers for Occluded Human
Mesh Recovery [84.67823511418334]
This paper presents a 3D JOint contrastive learning with TRansformers (JOTR) framework for handling occluded 3D human mesh recovery.
Our method includes an encoder-decoder transformer architecture to fuse 2D and 3D representations for achieving 2D & 3D aligned results.
arXiv Detail & Related papers (2023-07-31T02:58:58Z) - Deep-MDS Framework for Recovering the 3D Shape of 2D Landmarks from a
Single Image [8.368476827165114]
This paper proposes a framework to recover the 3D shape of 2D landmarks on a human face from a single input image.
A deep neural network learns the pairwise dissimilarities among the 2D landmarks, which are then used by the NMDS approach.
arXiv Detail & Related papers (2022-10-27T06:20:10Z) - Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral
Compressive Imaging [142.11622043078867]
We propose a principled Degradation-Aware Unfolding Framework (DAUF) that estimates parameters from the compressed image and physical mask, and then uses these parameters to control each iteration.
By plugging HST into DAUF, we establish the first Transformer-based deep unfolding method, Degradation-Aware Unfolding Half-Shuffle Transformer (DAUHST) for HSI reconstruction.
arXiv Detail & Related papers (2022-05-20T11:37:44Z) - Multi-initialization Optimization Network for Accurate 3D Human Pose and
Shape Estimation [75.44912541912252]
We propose a three-stage framework named Multi-Initialization Optimization Network (MION)
In the first stage, we strategically select different coarse 3D reconstruction candidates that are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to respectively refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
arXiv Detail & Related papers (2021-12-24T02:43:58Z) - One-dimensional Deep Low-rank and Sparse Network for Accelerated MRI [19.942978606567547]
Deep learning has shown astonishing performance in accelerated magnetic resonance imaging (MRI)
Most state-of-the-art deep learning reconstructions adopt the powerful convolutional neural network and perform 2D convolution.
We present a new approach that explores 1D convolution, making the deep network much easier to train and generalize.
arXiv Detail & Related papers (2021-12-09T06:39:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.