Universal Online Optimization in Dynamic Environments via Uniclass Prediction
- URL: http://arxiv.org/abs/2302.06066v1
- Date: Mon, 13 Feb 2023 03:00:45 GMT
- Title: Universal Online Optimization in Dynamic Environments via Uniclass Prediction
- Authors: Arnold Salas
- Abstract summary: We propose a novel and intuitive framework for universal online optimization in dynamic environments.
Our strategy does not rely on the construction of a set of experts and an accompanying meta-algorithm.
This is the first paper proposing a universal approach with state-of-the-art dynamic regret guarantees even for general convex cost functions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, several universal methods have been proposed for online convex
optimization which can handle convex, strongly convex and exponentially concave
cost functions simultaneously. However, most of these algorithms have been
designed with static regret minimization in mind, but this notion of regret may
not be suitable for changing environments. To address this shortcoming, we
propose a novel and intuitive framework for universal online optimization in
dynamic environments. Unlike existing universal algorithms, our strategy does
not rely on the construction of a set of experts and an accompanying
meta-algorithm. Instead, we show that the problem of dynamic online
optimization can be reduced to a uniclass prediction problem. Because the
choice of the uniclass loss function is left in the user's hands, users can
control and optimize the dynamic regret bounds, which in turn carry over to the original
problem. To the best of our knowledge, this is the first paper proposing a
universal approach with state-of-the-art dynamic regret guarantees even for
general convex cost functions.
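To make the abstract's central distinction concrete, the standard definitions read as follows (notation is assumed here, not quoted from the paper): static regret benchmarks against a single fixed decision, while dynamic regret benchmarks against an arbitrary comparator sequence.
```latex
% Static regret: best fixed decision in hindsight
R_T^{\mathrm{s}} = \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=1}^{T} f_t(\mathbf{x})
% Dynamic regret: an arbitrary comparator sequence u_1, ..., u_T
R_T^{\mathrm{d}} = \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} f_t(\mathbf{u}_t)
```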
Related papers
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
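As a hypothetical illustration of this equivalence (a minimal NumPy sketch on a quadratic objective, not the paper's Folded Optimization system): differentiating unrolled gradient descent on f(x) = 0.5 x^T A x - b^T x with respect to b yields a recursion whose fixed point solves the linear system A J = I.
```python
import numpy as np

# Unrolled gradient descent on f(x) = 0.5 x^T A x - b^T x (minimizer x* = A^{-1} b).
A = np.array([[3.0, 0.5],
              [0.5, 2.0]])        # symmetric positive definite
b = np.array([1.0, -1.0])
eta = 0.1                          # step size (< 2 / lambda_max(A))

x = np.zeros(2)                    # forward iterate
J = np.zeros((2, 2))               # backward pass: Jacobian dx_k / db
for _ in range(300):
    x = x - eta * (A @ x - b)                        # forward: one GD step
    J = (np.eye(2) - eta * A) @ J + eta * np.eye(2)  # chain rule through the step

# The Jacobian recursion is exactly a fixed-point iteration for A J = I,
# so the backward pass of unrolling converges to the implicit derivative A^{-1}.
assert np.allclose(x, np.linalg.solve(A, b), atol=1e-8)
assert np.allclose(J, np.linalg.inv(A), atol=1e-8)
```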
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Introduction to Online Nonstochastic Control [34.77535508151501]
In online nonstochastic control, both the cost functions as well as the perturbations from the assumed dynamical model are chosen by an adversary.
The target is to attain low regret against the best policy in hindsight from a benchmark class of policies.
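In a common formulation of this setting (notation assumed here, not quoted from the monograph), the state follows linear dynamics with adversarially chosen perturbations and costs, and regret is measured against the best policy in a benchmark class:
```latex
% Linear dynamics with adversarial disturbance w_t and adversarial convex costs c_t
x_{t+1} = A\,x_t + B\,u_t + w_t, \qquad
\mathrm{Regret}_T = \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\!\left(x_t^{\pi}, u_t^{\pi}\right)
```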
arXiv Detail & Related papers (2022-11-17T16:12:45Z)
- Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
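For reference, bounds in this line of work are stated in terms of the comparator's path length; the classical worst-case rate for general convex functions, which the problem-dependent bounds refine, is:
```latex
% Path length of the comparator sequence u_1, ..., u_T
P_T = \sum_{t=2}^{T} \lVert \mathbf{u}_t - \mathbf{u}_{t-1} \rVert_2,
\qquad
\text{worst-case dynamic regret} = \mathcal{O}\!\left(\sqrt{T\,(1 + P_T)}\right)
```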
arXiv Detail & Related papers (2021-12-29T02:42:59Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections and optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desirable to design a universal framework for adaptive algorithms that can solve general problems.
In particular, our novel framework provides convergence-analysis support for adaptive methods in the nonconvex setting.
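As a rough, hypothetical sketch of what an adaptive-gradient template looks like (a generic Adam-style update with momentum and a coordinate-wise preconditioner; this is not the SUPER-ADAM algorithm itself):
```python
import numpy as np

def adaptive_step(x, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One generic adaptive-gradient step: momentum plus a coordinate-wise
    second-moment preconditioner. Varying the second-moment rule recovers
    Adam-, AdaGrad-, or AMSGrad-style methods."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad           # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (preconditioner)
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)  # preconditioned update
    return x, (m, v, t)

# Usage: minimize the toy objective f(x) = ||x - 1||^2.
x = np.zeros(3)
state = (np.zeros(3), np.zeros(3), 0)
for _ in range(2000):
    grad = 2 * (x - 1.0)
    x, state = adaptive_step(x, grad, state, lr=0.05)
print(np.round(x, 3))  # close to [1. 1. 1.]
```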
arXiv Detail & Related papers (2021-06-15T15:16:28Z)
- A Simple yet Universal Strategy for Online Convex Optimization [97.64589722655612]
We propose a simple strategy for universal online convex optimization.
The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses.
Our strategy inherits the theoretical guarantees of any expert designed for strongly convex functions and exponentially concave functions.
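To make the expert-plus-meta-algorithm construction concrete, here is a simplified, hypothetical sketch: several online-gradient-descent experts with different learning rates run on the linearized losses, and a Hedge-style meta-algorithm combines their plays. The learning rates, domain, and function names are illustrative choices, not the authors'.
```python
import numpy as np

def expert_meta_sketch(T, grad_fn, d, etas=(0.01, 0.1, 1.0)):
    """One OGD expert per learning rate, each fed the linearized loss <g_t, x>,
    combined by Hedge (multiplicative weights). Domain: Euclidean ball ||x|| <= 1."""
    experts = [np.zeros(d) for _ in etas]
    w = np.ones(len(etas)) / len(etas)           # meta-algorithm weights
    meta_lr = np.sqrt(np.log(len(etas)) / T)
    for t in range(T):
        x = sum(wi * xi for wi, xi in zip(w, experts))  # combined play
        g = grad_fn(x, t)                               # gradient of f_t at x
        # Linearized loss of each expert, <g, x_i>; update the Hedge weights.
        losses = np.array([g @ xi for xi in experts])
        w *= np.exp(-meta_lr * (losses - losses.min()))
        w /= w.sum()
        # Each expert runs projected OGD on the linearized losses.
        for i, eta in enumerate(etas):
            xi = experts[i] - eta * g
            norm = np.linalg.norm(xi)
            experts[i] = xi if norm <= 1.0 else xi / norm
    return x

# Usage on a fixed quadratic f_t(x) = ||x - 0.5||^2 (gradient 2(x - 0.5)):
x_final = expert_meta_sketch(T=500, grad_fn=lambda x, t: 2 * (x - 0.5), d=2)
print(np.round(x_final, 2))  # converges near [0.5 0.5]
```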
arXiv Detail & Related papers (2021-05-08T11:43:49Z)
- An Online Prediction Approach Based on Incremental Support Vector Machine for Dynamic Multiobjective Optimization [19.336520152294213]
We propose a novel prediction algorithm based on an incremental support vector machine (ISVM).
We treat the solving of dynamic multiobjective optimization problems (DMOPs) as an online learning process.
The proposed algorithm can effectively tackle dynamic multiobjective optimization problems.
arXiv Detail & Related papers (2021-02-24T08:51:23Z)
- Minimizing Dynamic Regret and Adaptive Regret Simultaneously [60.17824125301273]
We propose novel online algorithms that are able to minimize the dynamic regret and adaptive regret simultaneously.
Our theoretical guarantee is even stronger in the sense that one algorithm is able to minimize the dynamic regret over any interval.
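For reference, adaptive regret strengthens static regret to hold on every contiguous interval; the standard (strongly adaptive) definition, in notation assumed here, is:
```latex
% Adaptive regret: static regret on the worst contiguous interval [r, s] of [1, T]
\mathrm{A\text{-}Regret}_T = \max_{1 \le r \le s \le T}
\left[ \sum_{t=r}^{s} f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x}) \right]
```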
arXiv Detail & Related papers (2020-02-06T03:32:37Z)