Stable and memory-efficient image recovery using monotone operator
learning (MOL)
- URL: http://arxiv.org/abs/2206.04797v1
- Date: Mon, 6 Jun 2022 21:56:11 GMT
- Title: Stable and memory-efficient image recovery using monotone operator
learning (MOL)
- Authors: Aniket Pramanik, Mathews Jacob
- Abstract summary: We introduce a monotone deep equilibrium learning framework for large-scale inverse problems in imaging.
The proposed algorithm relies on forward-backward splitting, where each iteration consists of a gradient descent involving the score function and a conjugate gradient algorithm to encourage data consistency.
Experiments show that the proposed scheme can offer improved performance in 3D settings while being stable in the presence of input perturbations.
- Score: 24.975981795360845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a monotone deep equilibrium learning framework for large-scale
inverse problems in imaging. The proposed algorithm relies on forward-backward
splitting, where each iteration consists of a gradient descent involving the
score function and a conjugate gradient algorithm to encourage data
consistency. The score function is modeled as a monotone convolutional neural
network. The use of a monotone operator offers several benefits, including
guaranteed convergence, uniqueness of fixed point, and robustness to input
perturbations, similar to the use of convex priors in compressive sensing. In
addition, the proposed formulation is significantly more memory-efficient than
unrolled methods, which allows us to apply it to 3D problems that current
unrolled algorithms cannot handle. Experiments show that the proposed scheme
can offer improved performance in 3D settings while being stable in the
presence of input perturbations.
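To make the iteration described in the abstract concrete, below is a minimal sketch, not the authors' implementation: the toy sampling operator A, the step size alpha, the weight lam, the fixed iteration counts, and the small untrained CNN standing in for the learned monotone score function are all illustrative assumptions.
```python
# Minimal sketch of one possible MOL-style forward-backward iteration for
# recovering x from measurements b = A(x). Not the authors' code: the toy
# operator A, step sizes, iteration counts, and the small CNN used as a
# stand-in for the learned monotone score are illustrative assumptions.
import torch
import torch.nn as nn

class ScoreCNN(nn.Module):
    """Small CNN playing the role of the learned score F(x); the monotonicity
    constraint used in the paper is NOT enforced in this placeholder."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1))

    def forward(self, x):
        return self.net(x)

def cg_solve(apply_M, rhs, n_iter=10):
    """Conjugate gradient for M x = rhs, with M symmetric positive definite."""
    x = torch.zeros_like(rhs)
    r = rhs.clone()
    p = r.clone()
    rs = (r * r).sum()
    for _ in range(n_iter):
        Mp = apply_M(p)
        step = rs / (p * Mp).sum()
        x = x + step * p
        r = r - step * Mp
        rs_new = (r * r).sum()
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def mol_iteration(x, b, A, At, score, alpha=0.1, lam=1.0):
    """Forward step: gradient descent on the score, z = x - alpha * F(x).
    Backward step: data consistency via CG on (At A + lam I) x = At b + lam z."""
    with torch.no_grad():
        z = x - alpha * score(x)
        return cg_solve(lambda v: At(A(v)) + lam * v, At(b) + lam * z)

# Toy usage: A keeps every other column of the image (a crude sampling mask).
mask = torch.zeros(1, 1, 32, 32)
mask[..., ::2] = 1.0
A = lambda x: mask * x
At = A                           # the mask operator is self-adjoint
x_true = torch.rand(1, 1, 32, 32)
b = A(x_true)
x = At(b)                        # initialization
score = ScoreCNN()
for _ in range(20):              # run the fixed-point (deep equilibrium) iterations
    x = mol_iteration(x, b, A, At, score)
```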
Related papers
- Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method to meet these requirements by minimizing the objective function of the RPM algorithm.
A branch-and-bound (BnB) algorithm is devised, which branches solely over the transformation parameters, thereby boosting the convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers, when compared with prevailing state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-14T13:28:57Z) - Diff-Reg v1: Diffusion Matching Model for Registration Problem [34.57825794576445]
Existing methods commonly leverage geometric or semantic point features to generate potential correspondences.
Previous methods, which rely on single-pass prediction, may struggle with local minima in complex scenarios.
We introduce a diffusion matching model for robust correspondence estimation.
arXiv Detail & Related papers (2024-03-29T02:10:38Z) - Fully Differentiable Correlation-driven 2D/3D Registration for X-ray to CT Image Fusion [3.868072865207522]
Image-based rigid 2D/3D registration is a critical technique for fluoroscopic guided surgical interventions.
We propose a novel fully differentiable correlation-driven network using a dual-branch CNN-transformer encoder.
A correlation-driven loss is proposed for low-frequency feature and high-frequency feature decomposition based on embedded information.
arXiv Detail & Related papers (2024-02-04T14:12:51Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
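For reference, here is a minimal sketch of the classic NCE objective mentioned above: binary logistic regression separating real samples from artificial noise, with the log-ratio of the unnormalized model density to the known noise density as the logit. The 1-D Gaussian toy model and noise distribution are illustrative assumptions, not this paper's setup.
```python
# Sketch of the classic NCE logistic loss (illustrative, not the paper's code).
import torch

def nce_loss(log_model, log_noise, x_data, x_noise):
    """log_model: unnormalized log-density of the learned model,
    log_noise: log-density of the known noise distribution q."""
    logit_data = log_model(x_data) - log_noise(x_data)     # classify as "real"
    logit_noise = log_model(x_noise) - log_noise(x_noise)  # classify as "noise"
    return (torch.nn.functional.softplus(-logit_data).mean()
            + torch.nn.functional.softplus(logit_noise).mean())

# Toy usage: fit the mean and log-normalizer of an unnormalized 1-D Gaussian.
theta = torch.zeros(2, requires_grad=True)       # [mean, log normalizing constant]
log_model = lambda x: -0.5 * (x - theta[0]) ** 2 - theta[1]
noise = torch.distributions.Normal(0.0, 2.0)
log_noise = noise.log_prob
opt = torch.optim.Adam([theta], lr=1e-2)
x_data = 1.0 + torch.randn(512)                  # "real" data ~ N(1, 1)
for _ in range(200):
    x_noise = noise.sample((512,))
    opt.zero_grad()
    nce_loss(log_model, log_noise, x_data, x_noise).backward()
    opt.step()
```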
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Input-gradient space particle inference for neural network ensembles [32.64178604645513]
First-order Repulsive Deep Ensemble (FoRDE) is an ensemble learning method based on ParVI.
Experiments on image classification datasets and transfer learning tasks show that FoRDE significantly outperforms the gold-standard DEs.
arXiv Detail & Related papers (2023-06-05T11:00:11Z) - Learning Iterative Robust Transformation Synchronization [71.73273007900717]
In this work, we avoid handcrafting robust loss functions and propose to use graph neural networks (GNNs) to learn transformation synchronization.
arXiv Detail & Related papers (2021-11-01T07:03:14Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations; this is achieved by minimizing the objective function of the RPM algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
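To make the bilinear setting concrete, here is a small sketch using plain synchronous gradient updates on both variables with a soft-threshold step for the sparsity constraint; this is not the paper's coupled cogradient (CoGD) rule, and the toy dictionary model below is an illustrative assumption.
```python
# Sketch of a bilinear problem: minimize 0.5 * || y - D @ (a * x) ||^2 over a
# (sparse code) and x (filter), where the two variables enter the residual
# bilinearly. Plain synchronous gradient steps with soft-thresholding on a;
# NOT the paper's cogradient update.
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 32))
a_true = np.where(rng.random(32) < 0.2, rng.standard_normal(32), 0.0)  # sparse
x_true = rng.standard_normal(32)
y = D @ (a_true * x_true)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

a = np.zeros(32)
x = rng.standard_normal(32)
lr, lam = 1e-3, 1e-2
for _ in range(2000):
    r = D @ (a * x) - y                  # shared residual couples a and x
    grad_a = (D.T @ r) * x               # gradient of the data term w.r.t. a
    grad_x = (D.T @ r) * a               # gradient of the data term w.r.t. x
    a = soft_threshold(a - lr * grad_a, lr * lam)   # sparsity constraint on a
    x = x - lr * grad_x
```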
arXiv Detail & Related papers (2020-06-16T13:41:54Z) - Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms [21.904012114713428]
We consider the sum of three convex functions, where the first one F is smooth, the second one is nonsmooth and proximable, and the third one is the composition of a nonsmooth proximable function with a linear operator.
This template problem has many applications, for instance, in image processing and machine learning.
We propose a new primal-dual algorithm, which we call PDDY, for this problem.
arXiv Detail & Related papers (2020-04-03T10:48:01Z) - Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: The random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
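As an illustration of what such a sparsifier could look like, below is a sketch combining the classic top-k and random-k operators; this is a plausible reading of the named operator, not necessarily the paper's exact definition.
```python
# Illustrative "random-top-k"-style sparsifier: keep the k_top largest-magnitude
# entries of a gradient plus k_rand entries sampled at random from the rest.
import numpy as np

def random_top_k(g, k_top, k_rand, rng=np.random.default_rng()):
    out = np.zeros_like(g)
    top = np.argsort(np.abs(g))[-k_top:]              # largest-magnitude coordinates
    rest = np.setdiff1d(np.arange(g.size), top)       # remaining coordinates
    rand = rng.choice(rest, size=k_rand, replace=False)
    keep = np.concatenate([top, rand])
    out[keep] = g[keep]
    return out

g = np.random.default_rng(0).standard_normal(100)
sparse_g = random_top_k(g, k_top=5, k_rand=5)         # 10 nonzeros out of 100
```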
arXiv Detail & Related papers (2020-01-27T08:23:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.