Introduction to Online Convex Optimization
- URL: http://arxiv.org/abs/1909.05207v3
- Date: Sun, 6 Aug 2023 14:24:26 GMT
- Title: Introduction to Online Convex Optimization
- Authors: Elad Hazan
- Abstract summary: This manuscript portrays optimization as a process.
In many practical applications the environment is so complex that it is infeasible to lay out a comprehensive theoretical model.
- Score: 31.771131314017385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This manuscript portrays optimization as a process. In many practical
applications the environment is so complex that it is infeasible to lay out a
comprehensive theoretical model and use classical algorithmic theory and
mathematical optimization. It is necessary as well as beneficial to take a
robust approach, by applying an optimization method that learns as one goes
along, learning from experience as more aspects of the problem are observed.
This view of optimization as a process has become prominent in varied fields
and has led to some spectacular success in modeling and systems that are now
part of our daily lives.
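The canonical algorithm in this framework is online gradient descent, analyzed in the manuscript: play a point, observe a convex loss, step against its gradient with a decaying step size, and project back onto the feasible set, which yields O(sqrt(T)) regret for convex Lipschitz losses. A minimal sketch (the function names and toy losses are illustrative, not taken from the manuscript):

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the centered L2 ball."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, dim, radius=1.0):
    """Play x_t, observe loss f_t, step against its gradient, project back.
    With eta_t ~ 1/sqrt(t) this guarantees O(sqrt(T)) regret for convex,
    Lipschitz losses over a bounded feasible set."""
    x = np.zeros(dim)
    iterates = []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(x.copy())
        eta = radius / np.sqrt(t)          # decaying step size
        x = project_l2_ball(x - eta * grad(x), radius)
    return iterates

# Toy usage: quadratic losses f_t(x) = ||x - z_t||^2 with drifting targets z_t.
rng = np.random.default_rng(0)
targets = [0.5 + 0.1 * rng.normal(size=3) for _ in range(100)]
grad_fns = [lambda x, z=z: 2.0 * (x - z) for z in targets]
print("final iterate:", online_gradient_descent(grad_fns, dim=3)[-1])
```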
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Enhancing Deep Learning with Optimized Gradient Descent: Bridging Numerical Methods and Neural Network Training [0.036651088217486416]
This paper explores the relationship between optimization theory and deep learning.
We introduce an enhancement to the gradient descent algorithm and survey its variants, which are the cornerstone of neural network training.
Our experiments on diverse deep learning tasks substantiate the improved algorithm's efficacy.
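The paper's specific enhancement is not reproduced here; for reference, a minimal sketch of the kind of baseline such work builds on, gradient descent with heavy-ball momentum, one of the standard variants (the toy problem is illustrative):

```python
import numpy as np

def gd_momentum(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with heavy-ball momentum, a standard descent variant
    (an illustrative baseline, not the paper's enhanced algorithm)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * grad(x)   # velocity accumulates past gradients
        x = x + v
    return x

# Usage on a strongly convex quadratic: minimize 0.5 x^T A x - b^T x.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x = gd_momentum(lambda x: A @ x - b, x0=[0.0, 0.0])
print(x, np.linalg.solve(A, b))  # the two should roughly agree
```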
arXiv Detail & Related papers (2024-09-07T04:37:20Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
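A minimal sketch of that equivalence on a toy quadratic, where the solver is a fixed-point iteration x* = T(x*, theta): differentiating the fixed-point condition gives the linear system (I - dT/dx) dx*/dtheta = dT/dtheta. The construction below is illustrative and much simpler than the paper's Folded Optimization system:

```python
import numpy as np

# Toy problem: min_x 0.5 x^T A x - theta^T x, solved by the fixed-point
# iteration x <- T(x, theta) = x - alpha * (A x - theta).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
alpha = 0.2
I = np.eye(2)

def solve_forward(theta, iters=500):
    x = np.zeros_like(theta)
    for _ in range(iters):
        x = x - alpha * (A @ x - theta)
    return x

def backward_by_linear_system():
    """Differentiate the fixed-point condition x* = T(x*, theta):
    (I - dT/dx) dx*/dtheta = dT/dtheta. For this linear toy map the
    Jacobians are constant: dT/dx = I - alpha*A, dT/dtheta = alpha*I."""
    return np.linalg.solve(I - (I - alpha * A), alpha * I)  # equals inv(A)

theta = np.array([1.0, -2.0])
print(solve_forward(theta), np.linalg.solve(A, theta))  # forward passes agree
print(backward_by_linear_system())
print(np.linalg.inv(A))                                 # exact dx*/dtheta
```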
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
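A minimal sketch of the general idea under strong simplifying assumptions: a differentiable decision objective and a linear model that maps features straight to solutions, trained on solution quality rather than parameter accuracy. All names and the toy problem are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 5))       # hidden map from features to parameters

def objective(x, c):
    """Decision problem: min_x 0.5 ||x||^2 - c @ x, whose optimum is x* = c."""
    return 0.5 * x @ x - c @ x

# Train a linear model M that maps features z straight to a solution x = M @ z,
# using the downstream objective itself as the loss -- no intermediate
# "predict c, then call a solver" step, and no handcrafted backward rule.
M = np.zeros((3, 5))
lr = 0.05
for _ in range(1000):
    z = rng.normal(size=5)
    c = W_true @ z                     # true parameters, available in training
    x = M @ z
    M -= lr * np.outer(x - c, z)       # d objective/dx = x - c, chained into M

z_test = rng.normal(size=5)
print(objective(M @ z_test, W_true @ z_test))       # learned solution quality
print(objective(W_true @ z_test, W_true @ z_test))  # optimal value
```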
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
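For contrast with the linear-system sketch above, a minimal sketch of what plain unrolling computes on the same toy quadratic: the Jacobian dx/dtheta propagated step by step through the solver iterations (written here in forward mode for clarity; an autodiff framework would build this up mechanically):

```python
import numpy as np

# Same toy quadratic as above: x_{k+1} = x_k - alpha * (A x_k - theta).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
alpha = 0.2
I = np.eye(2)

def unrolled_solver_with_jacobian(theta, iters=50):
    """Run the solver for a fixed number of steps while propagating
    dx/dtheta through every iteration -- the quantity autodiff accumulates
    when an iterative solver is unrolled in the computation graph."""
    x = np.zeros(2)
    J = np.zeros((2, 2))                      # dx_0/dtheta = 0
    for _ in range(iters):
        J = (I - alpha * A) @ J + alpha * I   # chain rule, one step at a time
        x = x - alpha * (A @ x - theta)
    return x, J

x, J = unrolled_solver_with_jacobian(np.array([1.0, -2.0]))
print(J)                 # approaches the implicit Jacobian as iters grows
print(np.linalg.inv(A))  # exact Jacobian of x*(theta) = inv(A) @ theta
```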
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that classical approaches cannot solve.
In this paper we construct a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
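A minimal sketch of the idea with a one-parameter, permutation-invariant set function trained to output max(S), a stand-in for a classic optimization problem over a set; the architecture and task are illustrative and far simpler than the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def set_model(S, beta):
    """Permutation-invariant set function: softmax-pooled values. Returns the
    pooled value and its derivative w.r.t. beta (the softmax variance), so no
    autodiff framework is needed for this sketch."""
    w = np.exp(beta * (S - S.max()))
    w /= w.sum()
    m = w @ S
    return m, w @ (S * S) - m * m

# Fit the single parameter beta so one forward pass of the set function
# approximates max(S) -- replacing an iterative per-instance solver with a
# trainable parametric set function, on variable-size inputs.
beta, lr = 1.0, 5.0
for _ in range(2000):
    S = rng.uniform(size=rng.integers(5, 15))
    value, dvalue = set_model(S, beta)
    beta -= lr * 2.0 * (value - S.max()) * dvalue   # squared-error gradient
S_test = rng.uniform(size=10)
print(set_model(S_test, beta)[0], S_test.max())     # should be close
```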
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
- Tutorial on amortized optimization [13.60842910539914]
This tutorial presents an introduction to the foundations of amortized optimization, which learns to rapidly predict the solutions of optimization problems.
It overviews their applications in variational inference, sparse coding, gradient-based meta-learning, control, reinforcement learning, convex optimization, optimal transport, and deep equilibrium networks.
arXiv Detail & Related papers (2022-02-01T18:58:33Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict the values of those parameters and then solving the problem using the predicted values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
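A minimal sketch of the surrogate idea, assuming the large problem's optimum lies in a known low-dimensional subspace (the paper learns such a mapping; here it is simply given): substitute x = P y and solve the small reduced problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 5

# Large decision problem: min_x 0.5 x^T Q x - b^T x in n = 200 dimensions,
# constructed so the optimum lies in the k = 5 dimensional subspace span(B).
M = rng.normal(size=(n, n)) / np.sqrt(n)
Q = np.eye(n) + 0.01 * (M @ M.T)            # positive definite
B = rng.normal(size=(n, k))
b = Q @ (B @ rng.normal(size=k))            # guarantees x* = inv(Q) b in span(B)

def full_objective(x):
    return 0.5 * x @ Q @ x - b @ x

# Low-dimensional surrogate: substitute x = P y and solve the small k x k
# problem min_y 0.5 y^T (P^T Q P) y - (P^T b) @ y. Here the subspace P is
# given; the paper learns such a quality-aware surrogate from data.
P = B
y = np.linalg.solve(P.T @ Q @ P, P.T @ b)

x_full = np.linalg.solve(Q, b)              # ground truth for comparison
print(full_objective(P @ y), full_objective(x_full))  # essentially equal
```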
arXiv Detail & Related papers (2020-06-18T19:11:54Z)