Adaptive Online Optimization with Predictions: Static and Dynamic
Environments
- URL: http://arxiv.org/abs/2205.00446v1
- Date: Sun, 1 May 2022 11:03:33 GMT
- Title: Adaptive Online Optimization with Predictions: Static and Dynamic
Environments
- Authors: Pedro Zattoni Scroccaro, Arman Sharifi Kolarijani and Peyman Mohajerin
Esfahani
- Abstract summary: We propose new step-size rules and OCO algorithms that exploit gradient predictions, function predictions and dynamics.
The proposed algorithms enjoy static and dynamic regret bounds in terms of the dynamics of the reference action sequence.
We present results for both convex and strongly convex costs.
- Score: 5.553963083111226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past few years, Online Convex Optimization (OCO) has received notable
attention in the control literature thanks to its flexible real-time nature and
powerful performance guarantees. In this paper, we propose new step-size rules
and OCO algorithms that simultaneously exploit gradient predictions, function
predictions and dynamics, features particularly pertinent to control
applications. The proposed algorithms enjoy static and dynamic regret bounds in
terms of the dynamics of the reference action sequence, gradient prediction
error and function prediction error, which are generalizations of known
regularity measures from the literature. We present results for both convex and
strongly convex costs. We validate the performance of the proposed algorithms
in a trajectory tracking case study, as well as portfolio optimization using
real-world datasets.
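As context for the prediction-based updates described in the abstract, the sketch below shows a generic optimistic (prediction-aware) online gradient step in Python. It is only an illustration of the template the paper builds on, with a fixed step size, a toy quadratic tracking loss, and a hypothetical gradient prediction; it is not the paper's proposed step-size rules or algorithms.

```python
import numpy as np

def project_ball(z, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(z)
    return z if norm <= radius else z * (radius / norm)

# Toy loop: track a slowly rotating target with f_t(x) = 0.5 * ||x - c_t||^2.
eta = 0.5                 # fixed step size; the paper proposes adaptive rules
y = np.zeros(2)           # internal state of the optimistic update
for t in range(50):
    c_now = np.array([np.cos(0.1 * t), np.sin(0.1 * t)])
    c_next = np.array([np.cos(0.1 * (t + 1)), np.sin(0.1 * (t + 1))])

    grad_pred = y - c_next                   # hypothetical gradient prediction (hint)
    x = project_ball(y - eta * grad_pred)    # action played in round t
    grad_obs = x - c_now                     # observed gradient of f_t at x
    y = project_ball(y - eta * grad_obs)     # state update with the observed gradient
```

When the gradient predictions are accurate, the played action leans toward the upcoming cost's minimizer; regret bounds stated in terms of prediction error quantify exactly this effect.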
Related papers
- Beyond Single-Model Views for Deep Learning: Optimization versus
Generalizability of Stochastic Optimization Algorithms [13.134564730161983]
This paper adopts a novel approach to deep learning optimization, focusing on stochastic gradient descent (SGD) and its variants.
We show that SGD and its variants perform on par with flat-minima optimizers such as SAM while requiring half the gradient evaluations.
Our study uncovers several key findings on the relationship between training loss and hold-out accuracy, as well as on the comparable performance of SGD and its noise-enabled variants.
arXiv Detail & Related papers (2024-03-01T14:55:22Z)
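For reference on the "half the gradient evaluations" remark above: a SAM-style update takes one gradient to compute a sharpness-seeking perturbation and a second gradient at the perturbed weights, whereas plain SGD takes one. The sketch below is a generic illustration with a user-supplied grad_fn, not the paper's experimental setup.

```python
import numpy as np

def sgd_step(w, grad_fn, lr):
    """Plain SGD: one gradient evaluation per parameter update."""
    return w - lr * grad_fn(w)

def sam_step(w, grad_fn, lr, rho=0.05):
    """SAM-style step: two gradient evaluations per parameter update."""
    g = grad_fn(w)                                 # 1st gradient: ascent direction
    eps = rho * g / (np.linalg.norm(g) + 1e-12)    # worst-case perturbation in a rho-ball
    g_sharp = grad_fn(w + eps)                     # 2nd gradient: at perturbed weights
    return w - lr * g_sharp
```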
- Probabilistic Reduced-Dimensional Vector Autoregressive Modeling with Oblique Projections [0.7614628596146602]
We propose a reduced-dimensional vector autoregressive model to extract low-dimensional dynamics from noisy data.
An optimal oblique decomposition is derived for the best predictability in terms of the prediction error covariance.
The superior performance and efficiency of the proposed approach are demonstrated using data sets from a synthesized Lorenz system and an industrial process from Eastman Chemical.
arXiv Detail & Related papers (2024-01-14T05:38:10Z)
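As a rough illustration of reduced-dimensional vector autoregression (not the paper's probabilistic, oblique-projection formulation), a rank-constrained VAR(1) transition matrix can be obtained by ordinary least squares followed by SVD truncation:

```python
import numpy as np

def reduced_rank_var1(X, rank):
    """Fit x_{t+1} ~ A x_t with a rank-constrained transition matrix A.

    X    : (T, n) array of observations ordered in time
    rank : target dimension of the extracted low-dimensional dynamics
    """
    X_past, X_next = X[:-1], X[1:]
    # Ordinary least-squares estimate of the full transition matrix ...
    B, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)   # X_next ~ X_past @ B
    A_full = B.T                                           # so that x_{t+1} ~ A_full @ x_t
    # ... then truncate it to the requested rank via the SVD.
    U, s, Vt = np.linalg.svd(A_full)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
```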
- Comparative Evaluation of Metaheuristic Algorithms for Hyperparameter Selection in Short-Term Weather Forecasting [0.0]
This paper explores the application of metaheuristic algorithms, namely the Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimization (PSO), to hyperparameter selection for short-term weather forecasting.
We evaluate their performance based on metrics such as Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE).
arXiv Detail & Related papers (2023-09-05T22:13:35Z)
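The two evaluation metrics named above have the standard definitions sketched here; the functions are generic NumPy implementations, not the paper's code.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent (assumes y_true has no zeros)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```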
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
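For background on the conformal prediction framework being generalized above, the sketch below implements plain split conformal intervals for regression with a fixed, non-learnable score; the paper's contribution is to make such constructions differentiable with learnable parameters.

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_hat_test, alpha=0.1):
    """Plain split conformal intervals for regression.

    cal_residuals : |y - y_hat| on a held-out calibration set
    y_hat_test    : point predictions at the test inputs
    alpha         : target miscoverage (0.1 gives ~90% coverage)
    """
    cal_residuals = np.asarray(cal_residuals, float)
    n = cal_residuals.size
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(cal_residuals, level, method="higher")
    return y_hat_test - q, y_hat_test + q
```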
- Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control [0.0]
We show that the principal AlphaZero/TD-Gammon ideas of approximation in value space and rollout apply very broadly to deterministic and stochastic optimal control problems.
These ideas can be effectively integrated with other important methodologies such as model predictive control, adaptive control, decentralized control, and neural network-based value and policy approximations.
arXiv Detail & Related papers (2021-08-20T19:17:35Z)
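A minimal sketch of "approximation in value space with rollout" in the sense described above: choose the action minimizing the one-step cost plus the cost-to-go of simulating a base policy. The functions step, base_policy and stage_cost are hypothetical placeholders for a deterministic problem.

```python
def rollout_action(state, actions, step, base_policy, stage_cost, horizon=20):
    """One-step lookahead with a rollout estimate of the cost-to-go.

    step(s, a)       -> next state (deterministic model assumed)
    base_policy(s)   -> action of the heuristic being improved
    stage_cost(s, a) -> cost incurred at (s, a)
    """
    def rollout_cost(s):
        # Simulate the base policy for a finite horizon to estimate cost-to-go.
        total = 0.0
        for _ in range(horizon):
            a = base_policy(s)
            total += stage_cost(s, a)
            s = step(s, a)
        return total

    # Choose the action minimizing first-stage cost plus the rollout estimate.
    return min(actions, key=lambda a: stage_cost(state, a) + rollout_cost(step(state, a)))
```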
- Deformable Linear Object Prediction Using Locally Linear Latent Dynamics [51.740998379872195]
Prediction of deformable objects (e.g., rope) is challenging due to their non-linear dynamics and infinite-dimensional configuration spaces.
We learn a locally linear, action-conditioned dynamics model that can be used to predict future latent states.
We empirically demonstrate that our approach can predict the rope state accurately up to ten steps into the future.
arXiv Detail & Related papers (2021-03-26T00:29:31Z)
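A schematic of the locally linear, action-conditioned latent model described above: local_matrices is a hypothetical stand-in for the learned network that outputs the local transition matrices, and the rollout simply iterates the linearized dynamics.

```python
import numpy as np

def predict_latents(z0, actions, local_matrices, steps):
    """Roll a locally linear, action-conditioned latent model forward.

    local_matrices(z, u) is assumed to return the local transition matrices
    (A, B) at the current latent state z and action u; the model then applies
    z_{t+1} = A @ z_t + B @ u_t at each step.
    """
    z = np.asarray(z0, float)
    trajectory = []
    for t in range(steps):
        u = np.asarray(actions[t], float)
        A, B = local_matrices(z, u)
        z = A @ z + B @ u
        trajectory.append(z)
    return np.stack(trajectory)
```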
- Iterative Amortized Policy Optimization [147.63129234446197]
Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control.
From the variational inference perspective, policy networks are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly.
We demonstrate that iterative amortized policy optimization yields performance improvements over direct amortization on benchmark continuous control tasks.
arXiv Detail & Related papers (2020-10-20T23:25:42Z)
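To illustrate the direct-versus-iterative amortization distinction above: instead of producing the action distribution in a single forward pass, an iterative scheme refines its parameters over a few optimization steps. The sketch below uses a finite-difference ascent on a hypothetical q_value estimate purely for illustration; the paper's method is a learned, variational refinement procedure.

```python
import numpy as np

def refine_action_mean(state, mu_init, q_value, n_iters=5, lr=0.1, eps=1e-3):
    """Refine a Gaussian policy mean at a single state over a few iterations.

    q_value(state, action) is a hypothetical estimate of the action's value;
    the mean is nudged uphill with a finite-difference gradient instead of
    being produced in one feed-forward pass (direct amortization).
    """
    mu = np.array(mu_init, float)
    for _ in range(n_iters):
        grad = np.array([
            (q_value(state, mu + eps * e) - q_value(state, mu - eps * e)) / (2 * eps)
            for e in np.eye(mu.size)
        ])
        mu += lr * grad   # ascend the estimated value
    return mu
```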
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
arXiv Detail & Related papers (2020-06-23T01:34:18Z)
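The Lyapunov-style argument referenced above typically rests on a decrease condition of the following standard form (a generic statement, not the paper's exact theorem):

```latex
% Generic discrete-time Lyapunov decrease condition (illustrative only):
% for iterates \theta_{k+1} = T(\theta_k) with fixed point \theta^*,
% suppose V \ge 0, V(\theta^*) = 0, and
\[
  V(\theta_{k+1}) \;\le\; (1 - \alpha)\, V(\theta_k), \qquad \alpha \in (0, 1].
\]
% Then V(\theta_k) \le (1 - \alpha)^k V(\theta_0), i.e. convergence at a
% linear (geometric) rate; a contraction of the form
% V(\theta_{k+1}) \le c\, V(\theta_k)^2 would correspond to a quadratic rate.
```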
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in the deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proven to achieve the best-available convergence guarantee for objectives that do not satisfy the Polyak-Lojasiewicz (PL) condition, while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
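For reference, the PL (Polyak-Lojasiewicz) condition that separates the two regimes above is the standard inequality:

```latex
% A differentiable objective f with minimum value f^* is \mu-PL if
\[
  \tfrac{1}{2}\,\bigl\|\nabla f(x)\bigr\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr)
  \qquad \text{for all } x, \text{ with } \mu > 0,
\]
% under which gradient methods enjoy linear convergence even without
% convexity; "non-PL" objectives are those without such a guarantee.
```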
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.