An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees
- URL: http://arxiv.org/abs/2306.06378v1
- Date: Sat, 10 Jun 2023 08:25:16 GMT
- Authors: Alexandros Gkillas, Dimitris Ampeliotis, Kostas Berberidis
- Abstract summary: We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
- Score: 71.57324258813675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel methodology for addressing the
hyperspectral image (HSI) deconvolution problem. This problem is highly ill-posed,
and thus, requires proper priors (regularizers) to model the inherent
spectral-spatial correlations of the HSI signals. To this end, a new
optimization problem is formulated, leveraging a learnable regularizer in the
form of a neural network. To tackle this problem, an effective solver is
proposed using the half quadratic splitting methodology. The derived iterative
solver is then expressed as a fixed-point calculation problem within the Deep
Equilibrium (DEQ) framework, resulting in an interpretable architecture with
clearly explainable parameters and convergence properties that carry practical
benefits. The proposed model is a first attempt to handle the
classical HSI degradation problem with different blurring kernels and noise
levels via a single deep equilibrium model with significant computational
efficiency. Extensive numerical experiments validate the superiority of the
proposed methodology over other state-of-the-art methods. This superior
restoration performance is achieved while requiring 99.85% less computation
time as compared to existing methods.
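The half quadratic splitting (HQS) iteration that underlies the fixed-point formulation can be sketched as follows. This is an illustrative 1-D sketch, not the paper's implementation: the learnable neural regularizer is replaced by a simple soft-thresholding proximal step, and the kernel, `rho`, and `lam` values are assumptions.

```python
# Illustrative HQS fixed-point iteration for deconvolution (1-D, circular blur).
# The paper's learnable neural regularizer is replaced here by soft-thresholding.
import numpy as np

def hqs_fixed_point(y, blur, rho=1.0, lam=0.05, iters=300):
    """Alternate a data-fidelity update with a proximal (denoising) step.

    y    : blurred observation (1-D array)
    blur : circular convolution kernel, same length as y
    """
    H = np.fft.fft(blur)          # the blur is diagonal in the Fourier domain
    z = y.copy()
    for _ in range(iters):
        # Data step: argmin_x ||Hx - y||^2 + rho ||x - z||^2 (closed form via FFT)
        num = np.conj(H) * np.fft.fft(y) + rho * np.fft.fft(z)
        x = np.real(np.fft.ifft(num / (np.abs(H) ** 2 + rho)))
        # Regularizer step: stand-in for the learned denoiser prior
        z = np.sign(x) * np.maximum(np.abs(x) - lam / rho, 0.0)
    return z
```

In the DEQ framework, this same two-step map is treated as a fixed-point equation and solved (and differentiated) at its equilibrium, rather than being unrolled for a fixed number of iterations.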
Related papers
- Parameter Generation of Quantum Approximate Optimization Algorithm with Diffusion Model [3.6959187484738902]
Quantum computing presents a prospect for revolutionizing the field of probabilistic optimization.
The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm.
We show that the diffusion model is capable of learning the distribution of high-performing parameters and then synthesizing new parameters closer to optimal ones.
arXiv Detail & Related papers (2024-07-17T01:18:27Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}\big(\ln(T)/T^{1-\frac{1}{\alpha}}\big)$.
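For context, plain single-node AdaGrad can be sketched in a few lines. This is a generic illustration, not the over-the-air federated variant analyzed in the paper; the hyperparameter values are arbitrary.

```python
# Generic AdaGrad sketch: per-coordinate learning rates decay with the
# running sum of squared gradients.
import numpy as np

def adagrad(grad_fn, x0, lr=0.5, eps=1e-8, steps=1000):
    x = np.asarray(x0, dtype=float).copy()
    acc = np.zeros_like(x)                  # accumulated squared gradients
    for _ in range(steps):
        g = grad_fn(x)
        acc += g * g
        x -= lr * g / (np.sqrt(acc) + eps)  # coordinate-wise adaptive step
    return x
```

The accumulated term `acc` is what shrinks the effective step size over time and drives stationarity rates of the kind quoted above.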
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Decentralized Sum-of-Nonconvex Optimization [42.04181488477227]
We consider the decentralized optimization of a sum-of-nonconvex function, i.e., an objective given by the average of local non-convex components held by networked agents.
We propose an accelerated decentralized first-order algorithm built on gradient tracking and multi-consensus techniques.
arXiv Detail & Related papers (2024-02-04T05:48:45Z) - Adaptive operator learning for infinite-dimensional Bayesian inverse
problems [8.672948020721945]
We develop an adaptive operator learning framework that can reduce modeling error gradually by forcing the surrogate to be accurate in local areas.
We present a rigorous convergence guarantee in the linear case using the unscented Kalman inversion (UKI) framework.
The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
arXiv Detail & Related papers (2023-10-27T01:50:33Z) - On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z) - Towards a machine learning pipeline in reduced order modelling for
inverse problems: neural networks for boundary parametrization,
dimensionality reduction and solution manifold approximation [0.0]
Inverse problems, especially in a partial differential equation context, carry a huge computational load.
We apply a numerical pipeline that involves artificial neural networks to parametrize the boundary conditions of the problem at hand.
It yields a general framework capable of providing an ad-hoc parametrization of the inlet boundary that quickly converges to the optimal solution.
arXiv Detail & Related papers (2022-10-26T14:53:07Z) - A Globally Convergent Gradient-based Bilevel Hyperparameter Optimization
Method [0.0]
We propose a gradient-based bilevel method for solving the hyperparameter optimization problem.
We show that the proposed method converges with lower computation and leads to models that generalize better on the testing set.
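As a toy illustration of gradient-based bilevel hyperparameter optimization (not the paper's specific algorithm): the inner problem is a ridge regression with a closed-form solution, and the outer loop performs gradient descent on the validation loss over the regularization strength, with the hypergradient approximated by central finite differences. All names and values below are illustrative assumptions.

```python
# Toy bilevel hyperparameter optimization: outer gradient descent on the
# validation loss of a closed-form inner (ridge regression) problem.
import numpy as np

def inner_solve(Xtr, ytr, lam):
    """Inner (lower-level) problem: ridge regression, closed form."""
    d = Xtr.shape[1]
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)

def val_loss(lam, Xtr, ytr, Xval, yval):
    """Outer (upper-level) objective: validation MSE of the inner solution."""
    w = inner_solve(Xtr, ytr, lam)
    return np.mean((Xval @ w - yval) ** 2)

def bilevel_descent(Xtr, ytr, Xval, yval, lam0=1.0, lr=0.5, steps=200, h=1e-5):
    lam = lam0
    for _ in range(steps):
        # Central-difference approximation of the hypergradient d(val_loss)/d(lam)
        g = (val_loss(lam + h, Xtr, ytr, Xval, yval)
             - val_loss(lam - h, Xtr, ytr, Xval, yval)) / (2.0 * h)
        lam = max(lam - lr * g, 1e-8)   # project back to lam > 0
    return lam
```

Exact hypergradients (via implicit differentiation or unrolling, as in gradient-based bilevel methods) replace the finite-difference step when the inner problem is solved iteratively.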
arXiv Detail & Related papers (2022-08-25T14:25:16Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to combine the strengths of both existing lines of approach.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.