GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction
- URL: http://arxiv.org/abs/2207.08393v1
- Date: Mon, 18 Jul 2022 06:01:29 GMT
- Title: GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction
- Authors: Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D Desai, Christopher
M Sandino, Shreyas Vasanawala, John M Pauly, Morteza Mardani, Mert Pilanci
- Abstract summary: Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
- Score: 50.248694764703714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unrolled neural networks have recently achieved state-of-the-art accelerated
MRI reconstruction. These networks unroll iterative optimization algorithms by
alternating between physics-based consistency and neural-network based
regularization. However, they require several iterations of a large neural
network to handle high-dimensional imaging tasks such as 3D MRI. This limits
traditional training algorithms based on backpropagation due to prohibitively
large memory and compute requirements for calculating gradients and storing
intermediate activations. To address this challenge, we propose Greedy LEarning
for Accelerated MRI (GLEAM) reconstruction, an efficient training strategy for
high-dimensional imaging settings. GLEAM splits the end-to-end network into
decoupled network modules. Each module is optimized in a greedy manner with
decoupled gradient updates, reducing the memory footprint during training. We
show that the decoupled gradient updates can be performed in parallel on
multiple graphics processing units (GPUs) to further reduce training time. We
present experiments with 2D and 3D datasets including multi-coil knee, brain,
and dynamic cardiac cine MRI. We observe that: i) GLEAM generalizes as well as
state-of-the-art memory-efficient baselines such as gradient checkpointing and
invertible networks with the same memory footprint, but with 1.3x faster
training; ii) for the same memory footprint, GLEAM yields a 1.1 dB PSNR gain in 2D
and 1.8 dB in 3D over end-to-end baselines.
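The core idea, optimizing each unrolled module greedily with a local loss and no gradient flow across module boundaries, can be sketched in a toy scalar form. The modules, local squared-error loss, and learning rate below are illustrative assumptions, not the paper's actual unrolled MRI architecture:

```python
# Minimal sketch of greedy, decoupled module-wise training (the idea behind
# GLEAM), using plain Python scalars instead of a real unrolled network.
def train_greedy(modules, x, target, lr=0.1, steps=200):
    """Each module is a dict holding a scalar weight 'w'. The forward pass
    chains modules, but each module's gradient uses only its own local
    loss: the input it receives is treated as a constant (detached), so
    no gradient flows backward across module boundaries."""
    for _ in range(steps):
        inp = x
        for m in modules:
            out = m["w"] * inp                # module forward pass
            grad = 2.0 * (out - target) * inp  # d/dw of local loss (out - target)^2
            m["w"] -= lr * grad                # decoupled, module-local update
            inp = out                          # "detached" input to the next module
    return modules

modules = train_greedy([{"w": 0.5}, {"w": 0.5}], x=1.0, target=2.0)
```

Because each update only needs that module's activations and gradients, the peak memory footprint is one module rather than the whole unrolled network, and independent modules can in principle be updated on different GPUs in parallel, as the abstract describes.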
Related papers
- StoDIP: Efficient 3D MRF image reconstruction with deep image priors and stochastic iterations [3.4453266252081645]
We introduce StoDIP, a new algorithm that extends the ground-truth-free Deep Image Prior (DIP) reconstruction to 3D MRF imaging.
Tested on a dataset of whole-brain scans from healthy volunteers, StoDIP demonstrated superior performance over the ground-truth-free reconstruction baselines, both quantitatively and qualitatively.
arXiv Detail & Related papers (2024-08-05T10:32:06Z) - SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and
Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z) - MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table [62.164549651134465]
We propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality.
Our experiments with the state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF achieves the fastest training time on the same GPU hardware, with similar or even higher reconstruction quality.
arXiv Detail & Related papers (2023-04-25T05:44:50Z) - Memory-efficient Segmentation of High-resolution Volumetric MicroCT
Images [11.723370840090453]
We propose a memory-efficient network architecture for 3D high-resolution image segmentation.
The network incorporates both global and local features via a two-stage U-net-based cascaded framework.
Experiments show that it outperforms state-of-the-art 3D segmentation methods in terms of both segmentation accuracy and memory efficiency.
arXiv Detail & Related papers (2022-05-31T16:42:48Z) - Learned Cone-Beam CT Reconstruction Using Neural Ordinary Differential
Equations [8.621792868567018]
Learned iterative reconstruction algorithms for inverse problems offer the flexibility to combine analytical knowledge about the problem with modules learned from data.
In computed tomography, extending such approaches from 2D fan-beam to 3D cone-beam data is challenging due to the prohibitively high GPU memory.
This paper proposes to use neural ordinary differential equations to solve the reconstruction problem in a residual formulation via numerical integration.
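The residual/ODE connection that makes this formulation possible can be illustrated with a plain forward-Euler sketch; the toy decay dynamics below stand in for the learned reconstruction module and are an assumption for illustration:

```python
import math

def euler_integrate(f, x0, h, steps):
    """A residual update x <- x + h*f(x) is exactly one forward-Euler step
    of the ODE dx/dt = f(x); in a learned iterative reconstruction, a
    neural sub-network plays the role of f."""
    x = x0
    for _ in range(steps):
        x = x + h * f(x)  # residual (Euler) step
    return x

# toy dynamics dx/dt = -x, whose exact solution is x(t) = x0 * exp(-t)
approx = euler_integrate(lambda x: -x, x0=1.0, h=0.01, steps=100)
exact = math.exp(-1.0)
```

Casting the residual updates as numerical integration is what lets ODE solvers with adjoint-style, memory-efficient gradients replace storing every intermediate activation.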
arXiv Detail & Related papers (2022-01-19T12:32:38Z) - A New Backbone for Hyperspectral Image Reconstruction [90.48427561874402]
3D hyperspectral image (HSI) reconstruction refers to the inverse process of snapshot compressive imaging.
We propose a Spatial/Spectral Invariant Residual U-Net, namely SSI-ResU-Net.
We show that SSI-ResU-Net achieves competitive performance with over a 77.3% reduction in floating-point operations.
arXiv Detail & Related papers (2021-08-17T16:20:51Z) - Invertible Residual Network with Regularization for Effective Medical
Image Segmentation [2.76240219662896]
Invertible neural networks have been applied to significantly reduce activation memory footprint when training neural networks with backpropagation.
We propose two versions of the invertible Residual Network, namely the Partially Invertible Residual Network (Partially-InvRes) and the Fully Invertible Residual Network (Fully-InvRes).
Our results indicate that by using partially/fully invertible networks as the central workhorse in volumetric segmentation, we not only reduce the memory overhead but also achieve segmentation performance comparable to the non-invertible 3D U-Net.
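The memory saving of invertible networks comes from not storing activations: inputs can be recomputed exactly from outputs during the backward pass. A minimal additive-coupling sketch, where the function F below is an arbitrary stand-in for a learned sub-network:

```python
def coupling_forward(x1, x2, F):
    # additive coupling: y1 carries x1 through unchanged, y2 mixes in F(x1)
    return x1, x2 + F(x1)

def coupling_inverse(y1, y2, F):
    # exact inverse: recompute F(y1) and subtract, so the inputs
    # never need to be stored during the forward pass
    return y1, y2 - F(y1)

F = lambda v: 3.0 * v + 1.0   # toy stand-in for a learned sub-network
y1, y2 = coupling_forward(2.0, 5.0, F)
x1, x2 = coupling_inverse(y1, y2, F)
# the inputs (2.0, 5.0) are recovered exactly from (y1, y2)
```

Because the inverse is exact regardless of what F computes, F itself never needs to be invertible, which is what makes this construction practical with arbitrary neural sub-networks.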
arXiv Detail & Related papers (2021-03-16T13:19:59Z) - Memory-efficient Learning for High-Dimensional MRI Reconstruction [20.81538631727325]
We show improved image reconstruction performance for in-vivo 3D MRI and 2D+time cardiac cine MRI using a memory-efficient learning framework (MEL).
MEL uses far less GPU memory while marginally increasing the training time, which enables new applications of DL to high-dimensional MRI.
arXiv Detail & Related papers (2021-03-06T01:36:25Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Optimizing Memory Placement using Evolutionary Graph Reinforcement
Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.