Leveraging Predictions in Smoothed Online Convex Optimization via
Gradient-based Algorithms
- URL: http://arxiv.org/abs/2011.12539v1
- Date: Wed, 25 Nov 2020 06:25:51 GMT
- Authors: Yingying Li and Na Li
- Abstract summary: We consider online convex optimization with time-varying stage costs and additional switching costs.
Since the switching costs introduce coupling across all stages, multi-step-ahead predictions are incorporated; however, longer-term predictions tend to suffer from lower quality.
We introduce a gradient-based online algorithm, Receding Horizon Inexact Gradient (RHIG), and analyze its performance via dynamic regret bounds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider online convex optimization with time-varying stage costs and
additional switching costs. Since the switching costs introduce coupling across
all stages, multi-step-ahead (long-term) predictions are incorporated to
improve the online performance. However, longer-term predictions tend to suffer
from lower quality. Thus, a critical question is: how to reduce the impact of
long-term prediction errors on the online performance? To address this
question, we introduce a gradient-based online algorithm, Receding Horizon
Inexact Gradient (RHIG), and analyze its performance via dynamic regret bounds in
terms of the temporal variation of the environment and the prediction errors.
RHIG only considers at most $W$-step-ahead predictions to avoid being misled by
worse predictions in the longer term. The optimal choice of $W$ suggested by
our regret bounds depends on the tradeoff between the variation of the
environment and the prediction accuracy. Additionally, we apply RHIG to a
well-established stochastic prediction error model and provide expected regret
and concentration bounds under correlated prediction errors. Lastly, we
numerically test the performance of RHIG on quadrotor tracking problems.
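The receding-horizon idea in the abstract can be illustrated with a small sketch: refine the current decision using at most $W$ predicted stage-cost gradients while paying a quadratic switching cost for moving away from the previous decision. This is a minimal illustration of the general approach, not the paper's exact RHIG update; the function and parameter names (`rhig_step`, `beta`, `eta`) are assumptions for this sketch.

```python
def rhig_step(x_prev, pred_grads, beta=0.5, eta=0.1):
    """One illustrative receding-horizon update for a scalar decision.

    pred_grads holds gradient oracles of the predicted stage costs,
    ordered most-recent-first and truncated to at most W entries, so the
    algorithm never consumes predictions more than W steps old.  beta
    weights a quadratic switching cost (beta / 2) * (x - x_prev) ** 2.
    """
    x = x_prev
    # Warm-start from the oldest (coarsest) prediction and refine with
    # progressively fresher ones, so recent, more accurate predictions
    # correct errors inherited from long-term ones.
    for g in reversed(pred_grads):
        grad = g(x) + beta * (x - x_prev)  # stage gradient + switching term
        x -= eta * grad
    return x


# Usage: track a target at 1.0 under the quadratic stage cost (x - 1)^2.
step = rhig_step(0.0, [lambda x: 2 * (x - 1.0)] * 3)
```

With more prediction steps available, the iterate moves further toward the stage optimum, at the price of relying on longer-term (and typically noisier) predictions; this mirrors the $W$ tradeoff in the regret bounds.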
Related papers
- Loss Shaping Constraints for Long-Term Time Series Forecasting [79.3533114027664]
We present a Constrained Learning approach for long-term time series forecasting that respects a user-defined upper bound on the loss at each time-step.
We propose a practical primal-dual algorithm to tackle it and demonstrate that it exhibits competitive average performance on time series benchmarks while shaping the errors across the predicted window.
arXiv Detail & Related papers (2024-02-14T18:20:44Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Electricity Price Prediction for Energy Storage System Arbitrage: A Decision-focused Approach [4.992622806418143]
Electricity price prediction plays a vital role in energy storage system (ESS) management.
Current prediction models focus on reducing prediction errors but overlook their impact on downstream decision-making.
This paper proposes a decision-focused electricity price prediction approach for ESS arbitrage to bridge the gap from the downstream optimization model to the prediction model.
arXiv Detail & Related papers (2023-04-30T00:43:26Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Conformal Prediction Regions for Time Series using Linear Complementarity Programming [25.094249285804224]
We propose an optimization-based method for reducing conservatism to enable long horizon planning and verification.
We show that this problem can be cast as a mixed integer linear complementarity program (MILCP), which we then relax into a linear complementarity program (LCP)
arXiv Detail & Related papers (2023-04-03T15:32:38Z)
- Improved Online Conformal Prediction via Strongly Adaptive Online Learning [86.4346936885507]
We develop new online conformal prediction methods that minimize the strongly adaptive regret.
We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously.
Experiments show that our methods consistently obtain better coverage and smaller prediction sets than existing methods on real-world tasks.
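The summary above does not spell out the update rule, so as background, a standard baseline in this line of work is the adaptive conformal inference (ACI) recursion of Gibbs and Candès, which online-adjusts the miscoverage level by gradient descent; the sketch below shows that recursion, with parameter names chosen for this illustration rather than taken from the paper.

```python
def aci_update(alpha, miscovered, target=0.1, gamma=0.05):
    """One step of an ACI-style online miscoverage update.

    alpha is the current miscoverage level used to size the prediction
    set; target is the desired long-run miscoverage rate; gamma is a
    step size.  A miss pushes alpha down (wider future sets), while a
    covered point lets alpha creep up (narrower future sets).
    """
    err = 1.0 if miscovered else 0.0
    return alpha + gamma * (target - err)


# Usage: after observing whether the last prediction set covered the truth.
alpha_after_miss = aci_update(0.1, miscovered=True)
alpha_after_hit = aci_update(0.1, miscovered=False)
```

Methods like the one summarized above refine this kind of recursion with adaptive step sizes so that near-target coverage holds on every subinterval, not just on average.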
arXiv Detail & Related papers (2023-02-15T18:59:30Z)
- A Regret-Variance Trade-Off in Online Learning [14.41667013062914]
We show how variance of predictions can be exploited in learning.
In online prediction with corrupted losses, we show that the effect of corruption on the regret can be compensated by a large variance.
We extend our results to the setting of online linear regression.
arXiv Detail & Related papers (2022-06-06T14:50:19Z)
- A Universal Error Measure for Input Predictions Applied to Online Graph Problems [57.58926849872494]
We introduce a novel measure for quantifying the error in input predictions.
The measure captures errors due to absent predicted requests as well as unpredicted actual requests.
arXiv Detail & Related papers (2022-05-25T15:24:03Z)
- Smoothed Online Combinatorial Optimization Using Imperfect Predictions [27.201074566335222]
We study smoothed online optimization problems when an imperfect predictive model is available.
We show that using predictions to plan for a finite time horizon leads to regret dependent on the total predictive uncertainty and an additional switching cost.
Our algorithm shows a significant improvement in cumulative regret compared to other baselines in synthetic online distributed streaming problems.
arXiv Detail & Related papers (2022-04-23T02:30:39Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Lazy Lagrangians with Predictions for Online Learning [24.18464455081512]
We consider the general problem of online convex optimization with time-varying additive constraints.
A novel primal-dual algorithm is designed by combining a Follow-The-Regularized-Leader iteration with prediction-adaptive dynamic steps.
Our work extends the FTRL framework for this constrained OCO setting and outperforms the respective state-of-the-art greedy-based solutions.
arXiv Detail & Related papers (2022-01-08T21:49:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.