Safeguarded Learned Convex Optimization
- URL: http://arxiv.org/abs/2003.01880v3
- Date: Mon, 26 Sep 2022 19:18:12 GMT
- Title: Safeguarded Learned Convex Optimization
- Authors: Howard Heaton and Xiaohan Chen and Zhangyang Wang and Wotao Yin
- Abstract summary: Analytic optimization algorithms can be hand-designed to provably solve problems in an iterative fashion.
Data-driven algorithms can "learn to optimize" (L2O) with far fewer iterations and a similar per-iteration cost to general-purpose optimization algorithms.
We present a Safe-L2O framework to fuse the advantages of these approaches.
- Score: 106.81731132086851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applications abound in which optimization problems must be repeatedly solved,
each time with new (but similar) data. Analytic optimization algorithms can be
hand-designed to provably solve these problems in an iterative fashion. On one
hand, data-driven algorithms can "learn to optimize" (L2O) with far fewer
iterations and a similar per-iteration cost to general-purpose optimization
algorithms. On the other hand, unfortunately, many L2O algorithms lack convergence
guarantees. To fuse the advantages of these approaches, we present a Safe-L2O
framework. Safe-L2O updates incorporate a safeguard to guarantee convergence
for convex problems with proximal and/or gradient oracles. The safeguard is
simple and computationally cheap to implement, and it is activated only when
the data-driven L2O updates would perform poorly or appear to diverge. This
yields the numerical benefits of employing machine learning to create rapid L2O
algorithms while still guaranteeing convergence. Our numerical examples show
convergence of Safe-L2O algorithms, even when the provided data is not from the
distribution of training data.
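As a rough illustration of the safeguarding idea described above, the sketch below wraps a hypothetical learned update (`learned_update`, a placeholder for a trained L2O model) around a plain ISTA fallback for the LASSO problem. The residual-decrease test is one simple choice of safeguard for a proximal oracle, not necessarily the exact condition used in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def safeguarded_l2o_lasso(A, b, lam, learned_update, iters=100, alpha=0.99):
    """Sketch of a safeguarded L2O loop for the LASSO problem
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    The fallback is a plain proximal-gradient (ISTA) step; a learned candidate
    is accepted only if it shrinks the fixed-point residual by a factor alpha.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step size 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])

    def ista_step(z):
        return soft_threshold(z - step * A.T @ (A @ z - b), step * lam)

    def residual(z):
        # ||z - T(z)|| is zero exactly at LASSO solutions (fixed points of ISTA).
        return np.linalg.norm(z - ista_step(z))

    for k in range(iters):
        x_l2o = learned_update(x, k)                 # data-driven candidate
        if residual(x_l2o) <= alpha * residual(x):   # safeguard test
            x = x_l2o                                # keep the fast L2O step
        else:
            x = ista_step(x)                         # fall back to ISTA
    return x
```

In practice `learned_update` would be a small network trained offline on similar problem instances; whenever it misbehaves on out-of-distribution data, the residual test rejects it and the loop reduces to ordinary ISTA, which is what preserves the convergence guarantee.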
Related papers
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- A Simple Guard for Learned Optimizers [0.0]
We propose a new class of Safeguarded L2O, called Loss-Guarded L2O (LGL2O).
Safeguarded L2O can take a learned algorithm and safeguard it with a generic learning algorithm so that, by conditionally switching between the two, the resulting algorithm is provably convergent.
We give a theoretical proof of LGL2O's convergence guarantee and empirical results comparing it to GL2O (a minimal sketch of this conditional-switching idea appears after this list).
arXiv Detail & Related papers (2022-01-28T21:32:28Z)
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous convergence on both the primal and dual sides.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- New Oracle-Efficient Algorithms for Private Synthetic Data Release [52.33506193761153]
We present three new algorithms for constructing differentially private synthetic data.
The algorithms satisfy differential privacy even in the worst case.
Compared to the state-of-the-art High-Dimensional Matrix Mechanism [McKennaMHM18], our algorithms provide better accuracy on large workloads.
arXiv Detail & Related papers (2020-07-10T15:46:05Z)
- Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate large gains over solving the full problem, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
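The LGL2O entry above describes conditionally switching between a learned update and a generic fallback based on loss values. Below is a minimal sketch of one plausible loss guard, assuming a gradient-descent fallback and a hypothetical `learned_step` model; the comparison used here is an illustrative condition, not the paper's exact switching rule.

```python
import numpy as np

def loss_guarded_step(x, grad_fn, loss_fn, learned_step, lr=0.1):
    """One loss-guarded update: keep the learned candidate only if its loss is
    no worse than that of a generic gradient-descent fallback step.
    `learned_step` stands in for a trained optimizer (hypothetical placeholder).
    """
    x_fallback = x - lr * grad_fn(x)    # generic, provably convergent step
    x_learned = learned_step(x)         # data-driven candidate step
    if loss_fn(x_learned) <= loss_fn(x_fallback):
        return x_learned                # learned step is doing at least as well
    return x_fallback                   # guard trips: take the safe step

# Toy usage: minimize f(x) = 0.5 * ||x||^2 with a deliberately bad "learned" step.
f = lambda x: 0.5 * float(np.dot(x, x))
grad_f = lambda x: x
bad_learned_step = lambda x: 10.0 * x   # would diverge if used unconditionally
x = np.ones(5)
for _ in range(50):
    x = loss_guarded_step(x, grad_f, f, bad_learned_step)
print(f(x))   # small: the guard keeps rejecting the divergent learned step
```

The design choice mirrors the safeguarding theme of the main paper: the learned step provides speed when it behaves well, while the generic fallback step supplies the convergence guarantee whenever the guard condition fails.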