COEP: Cascade Optimization for Inverse Problems with Entropy-Preserving Hyperparameter Tuning
- URL: http://arxiv.org/abs/2210.13983v2
- Date: Wed, 26 Oct 2022 14:13:38 GMT
- Title: COEP: Cascade Optimization for Inverse Problems with Entropy-Preserving Hyperparameter Tuning
- Authors: Tianci Liu, Tong Yang, Quan Zhang, Qi Lei
- Abstract summary: COEP is an automated and principled framework to solve inverse problems with deep generative models.
We empirically validate the strength of COEP on denoising and noisy compressed sensing.
- Score: 26.531298115933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose COEP, an automated and principled framework to solve inverse
problems with deep generative models. COEP consists of two components, a
cascade algorithm for optimization and an entropy-preserving criterion for
hyperparameter tuning. Through COEP, the two components form an efficient,
end-to-end solver for inverse problems that requires no human evaluation. We
establish theoretical guarantees for the proposed methods. We also empirically
validate the strength of COEP on denoising and noisy compressed sensing, which
are two fundamental tasks in inverse problems.
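The abstract's setting can be illustrated with a toy example. The sketch below is not COEP itself (the cascade algorithm and the entropy-preserving tuning criterion are the paper's contributions); it only shows the generic inverse-problem formulation they build on, with a fixed random linear map as a hypothetical stand-in for a trained deep generator G:

```python
import numpy as np

# Toy noisy compressed sensing with a "generative" prior: recover x = G(z)
# from measurements y = A x + noise by gradient descent on the latent z.
# G here is a random linear map standing in for a trained deep generator.
rng = np.random.default_rng(0)
n, m, k = 64, 20, 8                       # signal dim, measurements, latent dim
G = rng.standard_normal((n, k))           # stand-in generator: G(z) = G @ z
A = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix

z_true = rng.standard_normal(k)
x_true = G @ z_true
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy measurements

# Gradient descent on f(z) = 0.5 * ||A G z - y||^2, with the step size
# set from the Lipschitz constant of the gradient (spectral norm squared).
L = np.linalg.norm(A @ G, 2) ** 2
z = np.zeros(k)
for _ in range(2000):
    grad = G.T @ (A.T @ (A @ (G @ z) - y))
    z -= (1.0 / L) * grad

x_hat = G @ z
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Even with fewer measurements than signal dimensions (m = 20 < n = 64), the low-dimensional latent parameterization makes recovery possible; choosing the step size and stopping criterion automatically is exactly the kind of hyperparameter problem COEP addresses.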
Related papers
- Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes [12.76843681997386]
Policy Optimization (PO) methods are among the most popular Reinforcement Learning (RL) algorithms in practice.
This paper proposes a PO-based algorithm with rate-optimal regret guarantees under the linear Markov Decision Process (MDP) model.
Our algorithm achieves regret with improved dependence on the other parameters of the problem.
arXiv Detail & Related papers (2024-07-03T12:36:24Z)
- Quantum Annealing Solutions for the Closest String Problem with D-Wave Systems [0.0]
The Closest String Problem is an NP-complete problem that appears commonly in bioinformatics and coding theory.
Two QUBO formulations have been proposed, with one being a slight modification over the other.
D-Wave annealers are used, and guidelines are provided for handling certain platform-specific concerns that affect optimality.
arXiv Detail & Related papers (2023-10-19T16:03:25Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Meta-Learning Digitized-Counterdiabatic Quantum Optimization [3.0638256603183054]
We tackle the problem of finding suitable initial parameters for variational optimization by employing a meta-learning technique using recurrent neural networks.
We investigate this technique with the recently proposed digitized-counterdiabatic quantum approximate optimization algorithm (DC-QAOA).
The combination of meta-learning and DC-QAOA enables us to find optimal initial parameters for different models, such as the MaxCut problem and the Sherrington-Kirkpatrick model.
arXiv Detail & Related papers (2022-06-20T18:57:50Z)
- Multi-objective robust optimization using adaptive surrogate models for problems with mixed continuous-categorical parameters [0.0]
Robust design optimization is traditionally considered when uncertainties are mainly affecting the objective function.
The resulting nested optimization problem may be solved using a general-purpose solver, herein the non-dominated sorting genetic algorithm (NSGA-II).
The proposed approach consists of sequentially carrying out NSGA-II while using an adaptively built Kriging model to estimate the quantiles.
arXiv Detail & Related papers (2022-03-03T20:23:18Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
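The paper's specific stepsize rules are its own contribution, but the underlying gradient-clipping mechanism they attach to can be sketched in a few lines: rescale each stochastic gradient so its norm is bounded, which prevents heavy-tailed noise from destabilizing the updates.

```python
import numpy as np

def clip_norm(g, c):
    """Rescale g so its Euclidean norm is at most c (gradient clipping)."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

rng = np.random.default_rng(1)
w = np.array([5.0, -3.0])
for t in range(1, 2001):
    # Noisy gradient of f(w) = 0.5 * ||w||^2, corrupted by heavy-tailed
    # Student-t noise (df=2, so the noise has infinite variance).
    g = w + rng.standard_t(df=2, size=2)
    # Clipped SGD step with a simple 1/sqrt(t) stepsize (illustrative only;
    # the paper derives sharper rules with high-probability guarantees).
    w -= (1.0 / np.sqrt(t)) * clip_norm(g, c=1.0)

print(np.linalg.norm(w))  # the iterate moves toward the minimizer at 0
```

Without the clipping step, a single heavy-tailed noise draw can throw the iterate arbitrarily far; with it, every update is bounded by the stepsize times the clipping threshold.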
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Objective-Sensitive Principal Component Analysis for High-Dimensional Inverse Problems [0.0]
We present a novel approach for adaptive, differentiable parameterization of large-scale random fields.
The developed technique is based on principal component analysis (PCA) but modifies a purely data-driven basis of principal components considering objective function behavior.
Three algorithms for optimal parameter decomposition are presented and applied to a 2D synthetic history-matching objective.
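The paper's contribution is the objective-sensitive modification; the plain data-driven PCA parameterization it starts from can be sketched as follows, with a hypothetical low-rank ensemble standing in for the random-field realizations:

```python
import numpy as np

# Parameterize high-dimensional "field" realizations by a few principal
# components computed from an ensemble (plain data-driven PCA; the paper
# additionally reshapes this basis using objective-function behavior).
rng = np.random.default_rng(2)
n_cells, n_samples, rank = 200, 300, 5

# Hypothetical ensemble with low-dimensional latent structure
basis = rng.standard_normal((rank, n_cells))
coeffs = rng.standard_normal((n_samples, rank))
fields = coeffs @ basis                      # (n_samples, n_cells)

# PCA basis via SVD of the centered ensemble
mean = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)
k = 5
components = Vt[:k]                          # (k, n_cells), orthonormal rows

# Represent a new realization by k coefficients and reconstruct it
new_field = rng.standard_normal(rank) @ basis
z = components @ (new_field - mean)          # low-dimensional parameters
recon = mean + z @ components
rel_err = np.linalg.norm(recon - new_field) / np.linalg.norm(new_field)
print(f"relative reconstruction error with {k} components: {rel_err:.2e}")
```

Because the ensemble here is exactly rank 5, five components reconstruct new realizations to numerical precision; an inverse-problem solver can then optimize over the k-dimensional vector z instead of all n_cells values.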
arXiv Detail & Related papers (2020-06-02T18:51:17Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as Quantum Approximate Optimization Algorithm (QAOA) are considered as one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.