Accelerating Plug-and-Play Image Reconstruction via Multi-Stage Sketched
Gradients
- URL: http://arxiv.org/abs/2203.07308v1
- Date: Mon, 14 Mar 2022 17:12:09 GMT
- Title: Accelerating Plug-and-Play Image Reconstruction via Multi-Stage Sketched
Gradients
- Authors: Junqi Tang
- Abstract summary: We propose a new paradigm for designing fast plug-and-play (lunch) algorithms using dimensionality reduction techniques.
Unlike existing approaches which utilize gradient iterations for acceleration, we propose novel multi-stage sketched gradient iterations.
- Score: 5.025654873456756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we propose a new paradigm for designing fast plug-and-play (PnP)
algorithms using dimensionality reduction techniques. Unlike existing
approaches which utilize stochastic gradient iterations for acceleration, we
propose novel multi-stage sketched gradient iterations which first perform
downsampling dimensionality reduction in the image space, and then efficiently
approximate the true gradient using the sketched gradient in the
low-dimensional space. This sketched gradient scheme can also be naturally
combined with PnP-SGD methods for further improvement on computational
complexity. As a generic acceleration scheme, it can be applied to accelerate
any existing PnP/RED algorithm. Our numerical experiments on X-ray fan-beam CT
demonstrate the remarkable effectiveness of our scheme: a computational
free lunch can be obtained through this dimensionality reduction in the image
space.
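To make the core idea concrete, the following is a minimal NumPy sketch of a single-stage sketched-gradient PnP iteration on a toy linear inverse problem. The random forward operator, the pairwise-average downsampling, the repetition upsampling, and the moving-average denoiser are all illustrative assumptions; this is not the paper's fan-beam CT setup, and it omits the multi-stage schedule of the actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 64, 128, 2  # image size, number of measurements, sketch factor
x_true = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")  # smooth ground truth
A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in for the forward operator (e.g., a CT projector)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Downsampling D (pairwise average) and upsampling U (repetition):
# on smooth images, U @ D @ x is close to x.
D = np.kron(np.eye(n // s), np.full((1, s), 1.0 / s))
U = np.kron(np.eye(n // s), np.full((s, 1), 1.0))
A_c = A @ U  # coarse forward operator acting on the low-dimensional image

def denoise(x, k=5):
    """Toy PnP denoiser (moving average); a learned denoiser in practice."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def sketched_gradient(x):
    """Gradient of 0.5 * ||A_c @ D @ x - b||^2, i.e. D.T @ A_c.T @ residual.

    The residual is formed on the coarse grid, so the per-iteration cost
    scales with n / s rather than n.
    """
    r_c = A_c @ (D @ x) - b
    return U @ (A_c.T @ r_c) / s  # D.T == U / s for these operators

x, step = np.zeros(n), 0.5
for _ in range(200):
    x = denoise(x - step * sketched_gradient(x))  # PnP: gradient step, then denoise

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The saving comes from forming the residual with the coarse operator A_c, which has s times fewer columns than A; a plausible reading of the multi-stage scheme is that the sketch resolution is refined across stages, which this single-stage sketch does not attempt.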
Related papers
- Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance [55.01966743652196]
We propose a novel algorithm for distributed stochastic gradient descent (SGD) with compressed gradient communication in the parameter-server framework.
Our gradient compression technique, named flattened one-bit stochastic gradient descent (FO-SGD), relies on two simple algorithmic ideas; a generic one-bit compressor in this spirit is sketched after this list.
arXiv Detail & Related papers (2024-05-17T21:17:27Z) - Optimizing CT Scan Geometries With and Without Gradients [7.788823739816626]
We show that gradient-based optimization algorithms are a viable alternative to gradient-free algorithms.
Gradient-based algorithms converge substantially faster while remaining comparable to gradient-free algorithms in capture range and robustness to the number of free parameters.
arXiv Detail & Related papers (2023-02-13T10:44:41Z) - Accelerating Deep Unrolling Networks via Dimensionality Reduction [5.73658856166614]
Deep unrolling networks are currently the state-of-the-art solutions for imaging inverse problems.
For high-dimensional imaging tasks, such as X-ray CT and MRI, deep unrolling schemes typically become inefficient.
We propose a new paradigm for designing efficient deep unrolling networks using dimensionality reduction schemes.
arXiv Detail & Related papers (2022-08-31T11:45:21Z) - Orthogonalising gradients to speed up neural network optimisation [0.0]
Optimisation of neural networks can be sped up by orthogonalising the gradients before the optimisation step, ensuring the diversification of the learned representations.
We tested this method on ImageNet and CIFAR-10, obtaining a large decrease in learning time, and also obtained a speed-up on the semi-supervised BarlowTwins method.
arXiv Detail & Related papers (2022-02-14T21:46:07Z) - GraDIRN: Learning Iterative Gradient Descent-based Energy Minimization
for Deformable Image Registration [9.684786294246749]
We present a Gradient Descent-based Image Registration Network (GraDIRN) for learning deformable image registration.
GraDIRN is based on multi-resolution gradient descent energy minimization.
We demonstrate that this approach achieves state-of-the-art registration performance while using fewer learnable parameters.
arXiv Detail & Related papers (2021-12-07T14:48:31Z) - Communication-Efficient Federated Learning via Quantized Compressed
Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves almost identical performance to the case with no compression.
arXiv Detail & Related papers (2021-11-30T02:13:54Z) - Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning scheme based on the Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable is subject to a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z) - DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z) - Channel-Directed Gradients for Optimization of Convolutional Neural
Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
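The FO-SGD entry above refers to one-bit gradient compression; since its "two simple algorithmic ideas" are not spelled out in the summary, what follows is only a minimal sketch of the generic scaled-sign compressor that this family of methods builds on. The function names and the mean-absolute-value scale are illustrative assumptions, not the FO-SGD construction itself.

```python
import numpy as np

def compress_one_bit(g):
    """Scaled-sign compression: one bit per coordinate plus a single float scale.

    This is the generic signSGD-style compressor; FO-SGD's own
    variance-control mechanisms are not reproduced here.
    """
    scale = np.abs(g).mean()  # scalar transmitted alongside the bit vector
    bits = np.signbit(g)      # True where the coordinate is negative
    return bits, scale

def decompress_one_bit(bits, scale):
    """Parameter-server side: reconstruct the gradient estimate."""
    return scale * np.where(bits, -1.0, 1.0)

# A worker compresses its gradient; the server reconstructs an estimate.
rng = np.random.default_rng(1)
g = rng.standard_normal(1000)
bits, scale = compress_one_bit(g)
g_hat = decompress_one_bit(bits, scale)
print("cosine similarity:", g @ g_hat / (np.linalg.norm(g) * np.linalg.norm(g_hat)))
```

Reconstructed gradients of this kind are biased toward the sign pattern, which is why practical one-bit schemes pair the compressor with error feedback or other variance-control machinery.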
This list is automatically generated from the titles and abstracts of the papers on this site.