First-Order Dynamic Optimization for Streaming Convex Costs
- URL: http://arxiv.org/abs/2310.07925v1
- Date: Wed, 11 Oct 2023 22:41:00 GMT
- Title: First-Order Dynamic Optimization for Streaming Convex Costs
- Authors: M. Rostami, H. Moradian, and S. S. Kia
- Abstract summary: We develop an approach to track the optimal solution with a bounded error.
Our algorithm uses only the first-order derivatives of the cost function.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a set of novel optimization algorithms for solving a
class of convex optimization problems with time-varying streaming cost
function. We develop an approach to track the optimal solution with a bounded
error. Unlike existing results, our algorithm uses only the first-order
derivatives of the cost function, which makes it computationally efficient for
optimization with a time-varying cost. We compare our algorithms to the
gradient descent algorithm and show why gradient descent is not an effective
solution for optimization problems with time-varying cost. Several examples,
including a model predictive control problem cast as a convex optimization
problem with a streaming time-varying cost function, demonstrate our results.
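The contrast with plain gradient descent can be made concrete on a toy streaming cost. The sketch below tracks the minimizer of an assumed time-varying quadratic with (i) plain gradient descent and (ii) a first-order, prediction-correction-style update that estimates the gradient's time drift by finite differences; the cost model, target trajectory, and step sizes are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Streaming cost f(x, t) = 0.5 * ||x - r(t)||^2, whose minimizer x*(t) = r(t)
# drifts over time.  The circular trajectory r(t) is an assumption made up
# for this illustration.
def target(t):
    return np.array([np.cos(t), np.sin(t)])

def grad(x, t):
    return x - target(t)

h, alpha, T = 0.05, 0.5, 200       # sampling period, step size, number of steps
x_gd = np.zeros(2)                 # plain gradient descent iterate
x_pc = np.zeros(2)                 # first-order drift-compensated iterate
g_prev = grad(x_pc, 0.0)

for k in range(1, T + 1):
    t = k * h
    # Plain gradient descent ignores the motion of the minimizer, so its
    # tracking error settles at a nonzero steady-state lag.
    x_gd = x_gd - alpha * grad(x_gd, t)

    # Illustrative first-order tracker: a finite difference of the gradient
    # in time (same iterate, consecutive sampling instants) approximates
    # h * d(grad)/dt; subtracting it pre-compensates for the drift before
    # the usual correction step.  A sketch, not the paper's exact update.
    g_now = grad(x_pc, t)
    drift = g_now - g_prev
    x_pc = x_pc - drift - alpha * g_now
    g_prev = grad(x_pc, t)

err_gd = np.linalg.norm(x_gd - target(T * h))
err_pc = np.linalg.norm(x_pc - target(T * h))
print(f"final tracking error: gradient descent {err_gd:.3f}, first-order tracker {err_pc:.3f}")
```

On this toy example the drift-compensated iterate follows the moving minimizer to within the finite-difference error, while plain gradient descent settles at a lag proportional to the target's speed divided by the step size.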
Related papers
- Online Convex Optimization with Memory and Limited Predictions [19.7248150754102]
We study the problem of online convex optimization with memory and predictions over a horizon $T$.
We propose an algorithm to solve this problem and show that the dynamic regret of the algorithm decays exponentially with the prediction window length.
arXiv Detail & Related papers (2024-10-31T02:33:47Z) - Time-Varying Convex Optimization with $O(n)$ Computational Complexity [0.0]
We consider the problem of unconstrained convex optimization, where the cost function changes with time.
Our proposed algorithms only require the first-order derivatives of the cost function with respect to the decision variable.
Specifically, the proposed algorithms reduce the computational cost from $O(n^3)$ to $O(n)$ per timestep.
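The source of this complexity gap can be seen by comparing the per-step linear algebra: a Hessian-based tracking update solves a dense $n \times n$ system every timestep, while a purely first-order update only performs vector operations once the gradient is available. The sketch below is a generic illustration of that cost difference under an assumed quadratic cost; the specific updates are not the paper's algorithms.

```python
import numpy as np

n = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Q = A @ A.T + np.eye(n)      # Hessian of an assumed quadratic cost (positive definite)
b = rng.standard_normal(n)
x = np.zeros(n)
g = Q @ x - b                # gradient of 0.5*x'Qx - b'x at the current iterate

# Second-order (Newton-style) tracking step: one dense n-by-n linear solve
# per timestep, i.e. O(n^3) work.
x_second_order = x - np.linalg.solve(Q, g)

# First-order step: only vector additions and scalings once the gradient is
# available, i.e. O(n) work per timestep.
alpha = 1e-3
x_first_order = x - alpha * g
```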
arXiv Detail & Related papers (2024-10-19T06:45:05Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
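To make the "constraints on velocities" idea concrete, one common way to write such a condition is sketched below; the notation, sign conventions, and the parameter $\alpha$ are our own and may differ from the paper's formulation.

```latex
% Feasible set C = { x : g_i(x) <= 0 }; I(x) indexes the (nearly) active constraints.
% Velocities, not positions, are constrained (with alpha > 0 and step size h > 0):
\[
  V_\alpha(x_k) \;=\; \bigl\{\, v \;:\; \nabla g_i(x_k)^{\top} v \,\le\, -\alpha\, g_i(x_k),
  \;\; i \in I(x_k) \,\bigr\},
  \qquad
  x_{k+1} \;=\; x_k + h\, v_k, \quad v_k \in V_\alpha(x_k).
\]
% V_alpha(x_k) is a polyhedron built only from gradients of the active
% constraints, so each step works with a sparse, local, convex approximation
% of C even when C itself is not convex.
```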
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
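For reference, the Moreau envelope appearing in this rate is the standard smoothing of a weakly convex function; the definition below is the textbook one, with the smoothing parameter $\lambda$ chosen as our notation.

```latex
% Moreau envelope of f with parameter \lambda > 0, and the proximal point attaining it:
\[
  f_{\lambda}(x) \;=\; \min_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\,\lVert y - x\rVert^{2} \Bigr\},
  \qquad
  \hat{x} \;=\; \operatorname*{arg\,min}_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\,\lVert y - x\rVert^{2} \Bigr\}.
\]
% For \rho-weakly convex f and \lambda < 1/\rho, f_\lambda is smooth with
% \nabla f_\lambda(x) = (x - \hat{x})/\lambda, so a small norm
% \lVert \nabla f_\lambda(x) \rVert certifies that x is close to an
% approximately stationary point of the original (possibly nonsmooth) problem.
```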
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.