Accelerating Deep Unrolling Networks via Dimensionality Reduction
- URL: http://arxiv.org/abs/2208.14784v1
- Date: Wed, 31 Aug 2022 11:45:21 GMT
- Title: Accelerating Deep Unrolling Networks via Dimensionality Reduction
- Authors: Junqi Tang, Subhadip Mukherjee, Carola-Bibiane Schönlieb
- Abstract summary: Deep unrolling networks are currently the state-of-the-art solutions for imaging inverse problems.
For high-dimensional imaging tasks, such as X-ray CT and MRI, deep unrolling schemes typically become inefficient.
We propose a new paradigm for designing efficient deep unrolling networks using dimensionality reduction schemes.
- Score: 5.73658856166614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a new paradigm for designing efficient deep unrolling
networks using dimensionality reduction schemes, including minibatch gradient
approximation and operator sketching. Deep unrolling networks are currently the
state-of-the-art solutions for imaging inverse problems. However, for
high-dimensional imaging tasks, especially X-ray CT and MRI, deep unrolling
schemes typically become inefficient in both memory and computation, since the
high-dimensional forward and adjoint operators must be evaluated multiple times.
Recently, researchers have found that such limitations can be partially addressed
by unrolling stochastic gradient descent (SGD), inspired by the success of
stochastic first-order optimization. In this work, we explore this direction
further and propose, first, a more expressive and practical stochastic
primal-dual unrolling, based on the state-of-the-art Learned Primal-Dual (LPD)
network, and second, a further acceleration of stochastic primal-dual unrolling
that uses sketching techniques to approximate products in the high-dimensional
image space. Operator sketching can be applied jointly with stochastic unrolling
for the best acceleration and compression performance. Our numerical experiments
on X-ray CT image reconstruction demonstrate the remarkable effectiveness of our
accelerated unrolling schemes.
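The abstract combines two reduction mechanisms: minibatch (stochastic) approximation of the forward/adjoint operators inside the unrolled iterations, and operator sketching in image space. Below is a minimal, hypothetical sketch of the first mechanism only: a single stochastic primal-dual block in the spirit of LPD that touches just a random subset of measurement rows (e.g. a minibatch of CT projection angles). The operator interface (`op.fwd`, `op.adj_subset`), tensor shapes, and network sizes are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of one stochastic primal-dual unrolled block.
import torch
import torch.nn as nn

class StochasticPDBlock(nn.Module):
    """One unrolled primal-dual block that only touches a random subset of
    measurement rows (e.g. a minibatch of CT projection angles)."""

    def __init__(self, n_channels=32):
        super().__init__()
        # small CNNs standing in for the learned dual/primal updates of LPD
        self.dual_net = nn.Sequential(
            nn.Conv2d(3, n_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_channels, 1, 3, padding=1))
        self.primal_net = nn.Sequential(
            nn.Conv2d(2, n_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_channels, 1, 3, padding=1))

    def forward(self, x, h, y, op, idx):
        # x: image estimate (B,1,H,W); h, y: dual variable and data (B,1,A,D),
        #    with A = number of projection angles, D = detector bins.
        # op: assumed interface with .fwd(x) -> (B,1,A,D) and
        #     .adj_subset(h_rows, idx) -> (B,1,H,W); idx: sampled angle indices.
        Ax_s = op.fwd(x)[:, :, idx]                          # minibatch forward pass
        h_s = h[:, :, idx] + self.dual_net(
            torch.cat([h[:, :, idx], Ax_s, y[:, :, idx]], dim=1))
        h = h.clone()
        h[:, :, idx] = h_s                                   # update only sampled rows
        back = op.adj_subset(h_s, idx)                       # adjoint of sampled rows
        x = x + self.primal_net(torch.cat([x, back], dim=1))
        return x, h
```

Stacking several such blocks, each drawing a fresh index set, gives the stochastic unrolling the abstract describes; the image-space sketching counterpart is illustrated further below alongside the earlier Operator Sketching entry.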
Related papers
- Generative imaging for radio interferometry with fast uncertainty quantification [4.294714866547824]
Learnable reconstruction methods have shown promise in providing efficient and high-quality reconstruction. In this article we explore the use of generative neural networks that enable efficient approximate sampling of the posterior distribution. Our methods provide a significant step toward computationally efficient, scalable, and uncertainty-aware imaging for next-generation radio telescopes.
arXiv Detail & Related papers (2025-07-28T18:52:07Z) - Diffusion Models for Solving Inverse Problems via Posterior Sampling with Piecewise Guidance [52.705112811734566]
A novel diffusion-based framework is introduced for solving inverse problems using a piecewise guidance scheme. The proposed method is problem-agnostic and readily adaptable to a variety of inverse problems. The framework achieves a reduction in inference time of 25% for inpainting with both random and center masks, and 23% and 24% for 4× and 8× super-resolution tasks.
arXiv Detail & Related papers (2025-07-22T19:35:14Z) - Compressive Imaging Reconstruction via Tensor Decomposed Multi-Resolution Grid Encoding [50.54887630778593]
Compressive imaging (CI) reconstruction aims to recover high-dimensional images from low-dimensional compressed measurements. Existing unsupervised representations may struggle to achieve a desired balance between representation ability and efficiency. We propose Tensor Decomposed multi-resolution Grid encoding (GridTD), an unsupervised continuous representation framework for CI reconstruction.
arXiv Detail & Related papers (2025-07-10T12:36:20Z) - Sketched Equivariant Imaging Regularization and Deep Internal Learning for Inverse Problems [4.287621751502392]
Equivariant Imaging (EI) regularization has become the de-facto technique for unsupervised training of deep imaging networks.
We propose a sketched EI regularization which leverages the randomized sketching techniques for acceleration.
We then extend our sketched EI regularization to develop an accelerated deep internal learning framework.
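The snippet above describes applying randomized sketching inside equivariant-imaging (EI) training. As a rough illustration of that idea (not the paper's formulation), the sketch below computes a standard EI-style consistency loss but subsamples the measurement rows used in the virtual forward/adjoint pass; `A_adj_subset`, the rotation group action, and the 25% sketch ratio are all assumptions.

```python
# Hypothetical sketch of an EI-style loss with a randomized row-subsampling sketch.
import torch

def sketched_ei_loss(f, A_fwd, A_adj, A_adj_subset, y, n_rows, sketch_ratio=0.25):
    """f: reconstruction network; A_fwd / A_adj: full forward / adjoint operators;
    A_adj_subset(y_rows, idx): adjoint restricted to a row subset (assumed interface)."""
    x1 = f(A_adj(y))                                   # reconstruction from real data
    k = int(torch.randint(0, 4, (1,)))
    x_t = torch.rot90(x1, k, dims=(-2, -1))            # random group action (rotation)
    idx = torch.randperm(n_rows)[: max(1, int(sketch_ratio * n_rows))]
    y_t = A_fwd(x_t)[..., idx, :]                      # sketched virtual measurements
    x2 = f(A_adj_subset(y_t, idx))                     # reconstruct from the sketch only
    return ((x_t - x2) ** 2).mean()                    # equivariance consistency term
```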
arXiv Detail & Related papers (2024-11-08T18:33:03Z) - Learning Efficient and Effective Trajectories for Differential Equation-based Image Restoration [59.744840744491945]
We reformulate trajectory optimization for this class of methods, focusing on enhancing both reconstruction quality and efficiency.
We propose cost-aware trajectory distillation to streamline complex paths into several manageable steps with adaptable sizes.
Experiments demonstrate the superiority of the proposed method, with a maximum PSNR improvement of 2.1 dB over state-of-the-art methods.
arXiv Detail & Related papers (2024-10-07T07:46:08Z) - Inter-slice Super-resolution of Magnetic Resonance Images by Pre-training and Self-supervised Fine-tuning [49.197385954021456]
In clinical practice, 2D magnetic resonance (MR) sequences are widely adopted. While individual 2D slices can be stacked to form a 3D volume, the relatively large slice spacing can pose challenges for visualization and subsequent analysis tasks.
To reduce slice spacing, deep-learning-based super-resolution techniques are widely investigated.
Most current solutions require a substantial number of paired high-resolution and low-resolution images for supervised training, which are typically unavailable in real-world scenarios.
arXiv Detail & Related papers (2024-06-10T02:20:26Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
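The soft-shrinkage idea described above (shrink unimportant weights by a small amount proportional to their magnitude rather than hard-pruning them) can be illustrated in a few lines of PyTorch. This is a paraphrase of the idea, not the authors' algorithm: the percentage threshold and shrink factor are assumptions.

```python
# Hypothetical illustration of soft shrinkage instead of hard pruning.
import torch

@torch.no_grad()
def soft_shrink_step(model, prune_pct=0.3, shrink=0.02):
    """Shrink the smallest-magnitude weights of each weight matrix/kernel by a
    small factor instead of zeroing them outright."""
    for p in model.parameters():
        if p.dim() < 2:                               # skip biases / norm parameters
            continue
        flat = p.abs().flatten()
        k = max(1, int(prune_pct * flat.numel()))
        thresh = flat.kthvalue(k).values              # magnitude threshold for this step
        mask = p.abs() <= thresh
        p[mask] = p[mask] * (1.0 - shrink)            # soft shrinkage, proportional to |w|
```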
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Loop Unrolled Shallow Equilibrium Regularizer (LUSER) -- A
Memory-Efficient Inverse Problem Solver [26.87738024952936]
In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements.
We propose a loop-unrolled (LU) algorithm with shallow equilibrium regularizers (LUSER).
These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training.
arXiv Detail & Related papers (2022-10-10T19:50:37Z) - Fast Auto-Differentiable Digitally Reconstructed Radiographs for Solving
Inverse Problems in Intraoperative Imaging [2.6027967363792865]
Digitally reconstructed radiographs (DRRs) are well-studied in preoperative settings.
DRRs can be used to solve inverse problems such as slice-to-volume registration and 3D reconstruction.
arXiv Detail & Related papers (2022-08-26T15:49:28Z) - GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
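GLEAM's memory saving comes from training the unrolled blocks greedily rather than end-to-end. The following is a hypothetical sketch of that training pattern (block-wise losses with the computation graph detached between blocks); the `blocks` / `data_consistency` interfaces and the per-block MSE loss are assumptions for illustration.

```python
# Hypothetical sketch of greedy, block-wise training of an unrolled network.
import torch.nn.functional as F

def greedy_train_step(blocks, optimizers, x0, y, x_true, data_consistency):
    """blocks: list of learned regularization modules; data_consistency(x, y):
    physics-based update (both assumed interfaces). Each block is trained with
    its own loss and optimizer, so memory does not grow with the unroll depth."""
    x = x0
    for block, opt in zip(blocks, optimizers):
        x = x.detach()                           # no backprop through earlier blocks
        x_new = block(data_consistency(x, y))    # physics step, then learned step
        loss = F.mse_loss(x_new, x_true)
        opt.zero_grad()
        loss.backward()
        opt.step()
        x = x_new
    return x.detach()
```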
arXiv Detail & Related papers (2022-07-18T06:01:29Z) - Operator Sketching for Deep Unrolling Networks [5.025654873456756]
We propose a new paradigm for designing efficient deep unrolling networks using operator sketching.
Our numerical experiments on X-ray CT image reconstruction demonstrate the effectiveness of sketched unrolling schemes.
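This entry (by the same authors) isolates the operator-sketching component used in the main paper above. A rough, hypothetical reading of the idea: perform the expensive forward/adjoint products on a coarser image grid via a downsampling sketch S and its approximate transpose. The coarse operator callables and the 2x pooling sketch are illustrative assumptions, not the paper's construction.

```python
# Hypothetical sketch of operator sketching in image space.
import torch.nn.functional as F

def sketch(x, factor=2):
    # S: restrict the image to a coarser grid (average pooling as the sketch)
    return F.avg_pool2d(x, factor)

def sketch_transpose(x_coarse, factor=2):
    # S^T (up to scaling): lift the coarse image back to the fine grid
    return F.interpolate(x_coarse, scale_factor=factor, mode="nearest")

def sketched_gradient(x, y, A_coarse_fwd, A_coarse_adj, factor=2):
    """Approximate A^T (A x - y) using only a coarse-grid operator A_coarse,
    i.e. A x ~= A_coarse(S x). A_coarse_fwd / A_coarse_adj are assumed callables."""
    residual = A_coarse_fwd(sketch(x, factor)) - y        # measurement-space residual
    return sketch_transpose(A_coarse_adj(residual), factor)
```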
arXiv Detail & Related papers (2022-03-21T17:34:18Z) - Accelerating Plug-and-Play Image Reconstruction via Multi-Stage Sketched
Gradients [5.025654873456756]
We propose a new paradigm for designing fast plug-and-play (PnP) algorithms using dimensionality reduction techniques.
Unlike existing approaches which utilize gradient iterations for acceleration, we propose novel multi-stage sketched gradient iterations.
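As a companion to the operator-sketching snippet above, here is a hypothetical outline of multi-stage sketched gradient iterations inside a plug-and-play (PnP) loop: early stages use a heavily sketched gradient, later stages a finer one, with a denoiser as the prior. The stage schedule, step size, and interfaces are assumptions.

```python
# Hypothetical outline of a multi-stage PnP loop with sketched gradients.
def multistage_pnp(x0, y, stage_grads, denoiser, iters_per_stage=10, step=1.0):
    """stage_grads: list of gradient callables from coarsest to finest sketch,
    each approximating A^T (A x - y) at that sketch level; denoiser: the
    plug-and-play prior. Inputs are assumed to be tensors/arrays."""
    x = x0
    for grad_fn in stage_grads:                     # coarse-to-fine stages
        for _ in range(iters_per_stage):
            x = denoiser(x - step * grad_fn(x, y))  # sketched gradient step + PnP prior
    return x
```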
arXiv Detail & Related papers (2022-03-14T17:12:09Z) - Homography Decomposition Networks for Planar Object Tracking [11.558401177707312]
Planar object tracking plays an important role in AI applications, such as robotics, visual servoing, and visual SLAM.
We propose a novel Homography Decomposition Networks (HDN) approach that drastically reduces and stabilizes the condition number by decomposing the homography transformation into two groups.
arXiv Detail & Related papers (2021-12-15T06:13:32Z) - SHINE: SHaring the INverse Estimate from the forward pass for bi-level
optimization and implicit models [15.541264326378366]
In recent years, implicit deep learning has emerged as a method to increase the depth of deep neural networks.
The training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix.
We propose a novel strategy to tackle this computational bottleneck from which many bi-level problems suffer.
arXiv Detail & Related papers (2021-06-01T15:07:34Z) - Riggable 3D Face Reconstruction via In-Network Optimization [58.016067611038046]
This paper presents a method for riggable 3D face reconstruction from monocular images.
It jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations.
Experiments demonstrate that our method achieves SOTA reconstruction accuracy, reasonable robustness and generalization ability.
arXiv Detail & Related papers (2021-04-08T03:53:20Z) - A Deep-Unfolded Reference-Based RPCA Network For Video
Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
arXiv Detail & Related papers (2020-10-02T11:40:09Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the resulting hybrid plug-and-play image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z) - Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)