Online Convex Optimization with Binary Constraints
- URL: http://arxiv.org/abs/2005.02274v3
- Date: Fri, 19 Feb 2021 21:00:44 GMT
- Title: Online Convex Optimization with Binary Constraints
- Authors: Antoine Lesage-Landry, Joshua A. Taylor, Duncan S. Callaway
- Abstract summary: We consider online optimization with binary decision variables and convex loss functions.
We design a new algorithm, binary online gradient descent (bOGD), and bound its expected dynamic regret.
We test the performance of bOGD in several simulations based on demand response.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider online optimization with binary decision variables and convex
loss functions. We design a new algorithm, binary online gradient descent
(bOGD), and bound its expected dynamic regret. We provide a regret bound that
holds for any time horizon and a specialized bound for finite time horizons.
First, we present the regret as the sum of the relaxed, continuous round
optimum tracking error and the rounding error of our update; the former
asymptotically decreases with time under certain conditions. Then, we derive
a finite-time bound that is sublinear in time and linear in the cumulative
variation of the relaxed, continuous round optima. We apply bOGD to demand
response with thermostatically controlled loads, in which binary constraints
model discrete on/off settings. We also model uncertainty and varying load
availability, which depend on temperature deadbands, lockout of cooling units
and manual overrides. We test the performance of bOGD in several simulations
based on demand response. The simulations corroborate that the use of
randomization in bOGD does not significantly degrade performance while making
the problem more tractable.
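As we read the abstract, bOGD runs projected online gradient descent on the continuous relaxation [0, 1]^n and then randomly rounds each coordinate to obtain a binary decision. The following is a minimal sketch under that reading; the function name, step size, and initialization are our own assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bogd(grad_fns, n, T, eta, rng=None):
    """Sketch of binary online gradient descent (bOGD): projected OGD on
    the relaxed box [0, 1]^n, with randomized rounding each round so that
    P[u_i = 1] = x_i. Details here are illustrative assumptions."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.full(n, 0.5)  # relaxed iterate in [0, 1]^n
    decisions = []
    for t in range(T):
        u = (rng.random(n) < x).astype(int)  # randomized rounding to binary
        decisions.append(u)
        g = grad_fns[t](x)                   # gradient of round-t convex loss
        x = np.clip(x - eta * g, 0.0, 1.0)   # projected gradient step
    return decisions, x

# Toy usage: every round penalizes distance of the relaxed point from 0.3.
grads = [lambda x: 2 * (x - 0.3) for _ in range(100)]
decisions, x = bogd(grads, n=4, T=100, eta=0.1)
```

The randomization keeps each expected decision equal to the relaxed iterate, which is consistent with the abstract's claim that randomization makes the binary problem tractable without significantly degrading performance.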
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
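A common building block for gradient-free convex optimization of the kind this entry describes is a two-point finite-difference gradient estimator. The sketch below shows that standard construction; the paper's actual estimator and step-size schedule may differ, and all names here are our own.

```python
import numpy as np

def two_point_grad_estimate(f, x, tau, rng):
    """Standard two-point zero-order gradient estimator: sample a random
    unit direction e and difference f along it. Illustrative only."""
    e = rng.normal(size=x.shape)
    e /= np.linalg.norm(e)
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

def zero_order_sgd(f, x0, steps, eta, tau, seed=0):
    """Gradient-free descent using only function evaluations."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    for _ in range(steps):
        x = x - eta * two_point_grad_estimate(f, x, tau, rng)
    return x

# Toy usage: minimize a smooth convex quadratic without its gradient.
x = zero_order_sgd(lambda v: float(np.dot(v, v)), [1.0, 1.0],
                   steps=2000, eta=0.05, tau=1e-3)
```

The estimator is unbiased for smooth functions up to an O(tau) term, which is why such methods admit convergence guarantees under the noise conditions the entry mentions.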
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
arXiv Detail & Related papers (2024-10-28T15:54:50Z) - On tracking varying bounds when forecasting bounded time series [0.0]
We consider a new framework in which a bounded random variable has unobserved bounds that vary over time.
We introduce an online likelihood-based estimation method to track the varying bounds.
arXiv Detail & Related papers (2023-06-23T10:44:49Z) - Online Dynamic Submodular Optimization [0.0]
We propose new algorithms with provable performance for online binary optimization.
We numerically test our algorithms in two power system applications: fast-timescale demand response and real-time distribution network reconfiguration.
arXiv Detail & Related papers (2023-06-19T10:37:15Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have notable limitations: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Online Convex Optimization with Stochastic Constraints: Zero Constraint Violation and Bandit Feedback [0.0]
We propose a variant of the drift-plus-penalty algorithm that guarantees $O(\sqrt{T})$ expected regret and zero constraint violation.
Our algorithm is oblivious to the length of the time horizon $T$, in contrast to the vanilla drift-plus-penalty method.
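The drift-plus-penalty idea underlying this entry can be sketched with a virtual queue that accumulates constraint violation and weights the constraint gradient in the primal step. This is a simplified single-constraint illustration of the classic scheme; the paper's variant adds modifications to achieve zero violation and horizon-obliviousness, and the parameter names here are our own.

```python
import numpy as np

def drift_plus_penalty(grad_f, grad_g, g, x0, T, V, alpha):
    """Toy drift-plus-penalty loop for OCO with one constraint g(x) <= 0.
    Q_t is a virtual queue tracking accumulated violation; the primal step
    trades off the loss gradient against the queue-weighted constraint
    gradient. Illustrative sketch, not the paper's exact algorithm."""
    x, Q = np.array(x0, float), 0.0
    for t in range(T):
        d = V * grad_f(t, x) + Q * grad_g(x)  # penalty + drift directions
        x = x - d / alpha                     # simple primal update
        Q = max(Q + g(x), 0.0)                # queue accumulates violation
    return x, Q

# Toy usage: minimize (x - 2)^2 subject to x <= 1; iterates settle near x = 1.
x, Q = drift_plus_penalty(
    grad_f=lambda t, x: 2 * (x - 2.0),
    grad_g=lambda x: 1.0,
    g=lambda x: x - 1.0,
    x0=0.0, T=500, V=1.0, alpha=10.0)
```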
arXiv Detail & Related papers (2023-01-26T18:04:26Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - Delay-Tolerant Constrained OCO with Application to Network Resource Allocation [44.67787270821051]
We consider online convex optimization (OCO) with multi-slot feedback delay.
An agent makes a sequence of online decisions to minimize the accumulation of time-varying convex loss functions.
We propose Delay-Tolerant Constrained-OCO, which uses a novel constraint penalty with double regularization to tackle the asynchrony between information feedback and decision updates.
arXiv Detail & Related papers (2021-05-09T19:32:33Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z)
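Dynamic regret, the performance measure shared by several entries above and by the bOGD paper itself, compares the learner's iterates against the per-round minimizers rather than a single fixed comparator. The toy sequence of drifting quadratics below is our own illustration of that standard definition.

```python
import numpy as np

def dynamic_regret(losses, minimizers, xs):
    """Dynamic regret: sum_t f_t(x_t) - f_t(x_t^*), where x_t^* minimizes
    the round-t loss. Standard definition; toy setup below is illustrative."""
    return sum(f(x) - f(s) for f, s, x in zip(losses, minimizers, xs))

# Toy: track slowly drifting quadratics f_t(x) = (x - c_t)^2 with OGD.
T, eta = 50, 0.5
cs = np.linspace(0.0, 1.0, T)               # slowly drifting optima
losses = [lambda x, c=c: (x - c) ** 2 for c in cs]
xs, x = [], 0.0
for t in range(T):
    xs.append(x)
    x -= eta * 2 * (x - cs[t])              # gradient step on round-t loss
reg = dynamic_regret(losses, cs, xs)
```

Because the comparator moves every round, bounds on this quantity are typically stated in terms of the cumulative variation of the round optima, exactly the quantity appearing in the bOGD finite-time bound above.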
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.