OpReg-Boost: Learning to Accelerate Online Algorithms with Operator
Regression
- URL: http://arxiv.org/abs/2105.13271v1
- Date: Thu, 27 May 2021 16:17:38 GMT
- Title: OpReg-Boost: Learning to Accelerate Online Algorithms with Operator
Regression
- Authors: Nicola Bastianello, Andrea Simonetto, Emiliano Dall'Anese
- Abstract summary: This paper presents a new regularization approach -- OpReg-Boost -- to boost the convergence of online optimization and learning algorithms.
We show how to formalize the operator regression problem and propose a computationally-efficient Peaceman-Rachford solver.
- Score: 5.457279006229211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a new regularization approach -- termed OpReg-Boost -- to
boost the convergence and lessen the asymptotic error of online optimization
and learning algorithms. In particular, the paper considers online algorithms
for optimization problems with a time-varying (weakly) convex composite cost.
For a given online algorithm, OpReg-Boost learns the closest algorithmic map
that yields linear convergence; to this end, the learning procedure hinges on
the concept of operator regression. We show how to formalize the operator
regression problem and propose a computationally-efficient Peaceman-Rachford
solver that exploits a closed-form solution of simple quadratically-constrained
quadratic programs (QCQPs). Simulation results showcase the superior properties
of OpReg-Boost w.r.t. the more classical forward-backward algorithm, FISTA, and
Anderson acceleration, and with respect to its close relative
convex-regression-boost (CvxReg-Boost) which is also novel but less performing.
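The baseline that OpReg-Boost is compared against is the classical forward-backward (proximal gradient) method run online on the time-varying composite cost. The sketch below is a minimal illustration of that baseline only, assuming a time-varying LASSO-type cost (a smooth least-squares term plus an l1 regularizer); the problem data, step size, and function names (`online_forward_backward`, `soft_threshold`) are illustrative assumptions rather than the paper's code, and the operator-regression boosting step itself is not reproduced.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_forward_backward(stream, x0, alpha, lam, passes_per_step=1):
    """
    Online forward-backward (proximal gradient) on a time-varying cost
        f_t(x) + g_t(x) = 0.5 * ||A_t x - b_t||^2 + lam * ||x||_1.
    `stream` yields (A_t, b_t) at each time step; only a few iterations
    are run per step, so the iterate tracks the moving optimizer.
    """
    x = x0.copy()
    trajectory = []
    for A_t, b_t in stream:
        for _ in range(passes_per_step):
            grad = A_t.T @ (A_t @ x - b_t)                     # forward (gradient) step on f_t
            x = soft_threshold(x - alpha * grad, alpha * lam)  # backward (prox) step on g_t
        trajectory.append(x.copy())
    return np.array(trajectory)

# Toy usage: a slowly drifting least-squares problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
stream = ((A, A @ (np.ones(5) * np.cos(0.1 * t)) + 0.01 * rng.standard_normal(20))
          for t in range(50))
xs = online_forward_backward(stream, x0=np.zeros(5), alpha=0.01, lam=0.1)
```

OpReg-Boost would then learn, via operator regression, a nearby algorithmic map that yields linear convergence and apply that learned map in place of the raw forward-backward step; that learning step is the paper's contribution and is only described in the abstract above.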
Related papers
- Optimizing Tensor Computation Graphs with Equality Saturation and Monte Carlo Tree Search [0.0]
We present a tensor graph rewriting approach that uses Monte Carlo tree search to build superior representations.
Our approach improves the inference speedup of neural networks by up to 11% compared to existing methods.
arXiv Detail & Related papers (2024-10-07T22:22:02Z)
- Minimax Adaptive Boosting for Online Nonparametric Regression [10.138723409205497]
We introduce a parameter-free online gradient boosting (OGB) algorithm for adversarial online nonparametric regression.
We show that its application to chaining trees achieves minimax optimal regret when competing against Lipschitz functions.
arXiv Detail & Related papers (2024-10-04T12:30:03Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Online Dynamic Submodular Optimization [0.0]
We propose new algorithms with provable performance for online binary optimization.
We numerically test our algorithms in two power system applications: fast-timescale demand response and real-time distribution network reconfiguration.
arXiv Detail & Related papers (2023-06-19T10:37:15Z)
- Online Learning Under A Separable Stochastic Approximation Framework [20.26530917721778]
We propose an online learning algorithm for a class of machine learning models under a separable stochastic approximation framework.
We show that the proposed algorithm produces more robust and better test performance when compared to other popular learning algorithms.
arXiv Detail & Related papers (2023-05-12T13:53:03Z)
- Implicit Parameter-free Online Learning with Truncated Linear Models [51.71216912089413]
Parameter-free algorithms are online learning algorithms that do not require setting learning rates.
We propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an "implicit" flavor.
Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties.
arXiv Detail & Related papers (2022-03-19T13:39:49Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models [15.541264326378366]
In recent years, implicit deep learning has emerged as a method to increase the depth of deep neural networks.
The training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix.
We propose a novel strategy to tackle this computational bottleneck from which many bi-level problems suffer.
arXiv Detail & Related papers (2021-06-01T15:07:34Z)
- Boosting for Online Convex Optimization [64.15578413206715]
We consider the decision-making framework of online convex optimization with a large number of experts.
We define a weak learning algorithm as a mechanism that guarantees approximate regret against a base class of experts.
We give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.
arXiv Detail & Related papers (2021-02-18T12:30:49Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]
Various parameter restart schemes have been proposed for accelerated algorithms to facilitate their practical convergence.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for nonsmooth nonconvex problems (a generic sketch of this style of method appears after this list).
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
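As a companion to the last related entry above (and to the FISTA comparison in the main abstract), here is a generic sketch of a proximal gradient step with Nesterov-style momentum and a simple function-value restart heuristic. It is not the specific algorithm of that paper; the restart test, parameter names, and step-size choice are illustrative assumptions.

```python
import numpy as np

def prox_grad_momentum_restart(grad_f, prox_g, f, x0, alpha, n_iters=200):
    """
    Accelerated proximal gradient (FISTA-style momentum) with a simple
    restart heuristic: whenever the composite objective increases, the
    momentum is reset. `f` returns the full objective f(x) + g(x).
    """
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    best = f(x0)
    for _ in range(n_iters):
        x = prox_g(y - alpha * grad_f(y), alpha)       # proximal gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)    # Nesterov extrapolation
        fx = f(x)
        if fx > best:                                  # restart: drop momentum when the objective increases
            y = x.copy()
            t_next = 1.0
        best = min(best, fx)
        x_prev, t = x, t_next
    return x_prev
```

For the time-varying LASSO example sketched earlier, `grad_f` would be `lambda x: A.T @ (A @ x - b)`, `prox_g` the soft-thresholding operator, and `f` the full composite objective. Plain momentum can overshoot on weakly convex or ill-conditioned problems; resetting the momentum whenever the objective increases is one common, inexpensive safeguard, and flexible restart rules of this kind are what that entry studies in more generality.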
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.