Landscape Surrogate: Learning Decision Losses for Mathematical
Optimization Under Partial Information
- URL: http://arxiv.org/abs/2307.08964v2
- Date: Thu, 2 Nov 2023 23:00:31 GMT
- Title: Landscape Surrogate: Learning Decision Losses for Mathematical
Optimization Under Partial Information
- Authors: Arman Zharmagambetov, Brandon Amos, Aaron Ferber, Taoan Huang, Bistra
Dilkina, Yuandong Tian
- Abstract summary: Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning.
We propose using a smooth and learnable Landscape Surrogate as a replacement for $f \circ \mathbf{g}$.
This surrogate, learnable by neural networks, can be computed faster than the solver $\mathbf{g}$, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization.
- Score: 48.784330281177446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent works in learning-integrated optimization have shown promise in
settings where the optimization problem is only partially observed or where
general-purpose optimizers perform poorly without expert tuning. By learning an
optimizer $\mathbf{g}$ to tackle these challenging problems with $f$ as the
objective, the optimization process can be substantially accelerated by
leveraging past experience. The optimizer can be trained with supervision from
known optimal solutions or implicitly by optimizing the compound function
$f\circ \mathbf{g}$. The implicit approach may not require optimal solutions as
labels and is capable of handling problem uncertainty; however, it is slow to
train and deploy due to frequent calls to optimizer $\mathbf{g}$ during both
training and testing. The training is further challenged by sparse gradients of
$\mathbf{g}$, especially for combinatorial solvers. To address these
challenges, we propose using a smooth and learnable Landscape Surrogate $M$ as
a replacement for $f\circ \mathbf{g}$. This surrogate, learnable by neural
networks, can be computed faster than the solver $\mathbf{g}$, provides dense
and smooth gradients during training, can generalize to unseen optimization
problems, and is efficiently learned via alternating optimization. We test our
approach on both synthetic problems, including shortest path and
multidimensional knapsack, and real-world problems such as portfolio
optimization, achieving comparable or superior objective values compared to
state-of-the-art baselines while reducing the number of calls to $\mathbf{g}$.
Notably, our approach outperforms existing methods for computationally
expensive high-dimensional problems.
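To make the alternating scheme in the abstract concrete, below is a hedged, minimal PyTorch sketch, not the authors' implementation: it assumes a toy top-k selection problem standing in for the solver $\mathbf{g}$, an MLP predictor producing coefficient estimates, and a second MLP playing the role of the landscape surrogate $M$. The surrogate is first fit to the observed decision objective $f(\mathbf{g}(\cdot))$, and the predictor is then updated by differentiating through $M$ instead of the non-smooth composition $f\circ \mathbf{g}$. All names below are illustrative assumptions.

```python
# Minimal sketch of the landscape-surrogate idea under toy assumptions:
# the solver g is a hard top-k selection (non-differentiable), the predictor maps
# item features to coefficient estimates, and the surrogate M is a small MLP
# trained to mimic the decision objective f(g(c_hat)). Not the paper's code.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_ITEMS, K, FEAT_DIM = 20, 5, 8


def solver_g(pred_values):
    """Combinatorial solver g: select the K items with the largest predicted value.

    Its output is piecewise constant in pred_values, so gradients through it are
    zero almost everywhere -- the sparse-gradient issue the surrogate avoids.
    """
    idx = torch.topk(pred_values, K, dim=-1).indices
    return torch.zeros_like(pred_values).scatter(-1, idx, 1.0)


def objective_f(true_values, x):
    """Decision objective f: realized value of the chosen items under the true coefficients."""
    return (true_values * x).sum(-1)


predictor = nn.Sequential(nn.Linear(FEAT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
surrogate = nn.Sequential(nn.Linear(2 * N_ITEMS, 64), nn.ReLU(), nn.Linear(64, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_surr = torch.optim.Adam(surrogate.parameters(), lr=1e-2)


def sample_batch(batch=32):
    """Synthetic training instances: item features plus hidden true coefficients."""
    feats = torch.randn(batch, N_ITEMS, FEAT_DIM)
    true_vals = feats.mean(-1) + 0.1 * torch.randn(batch, N_ITEMS)
    return feats, true_vals


for step in range(200):
    feats, true_vals = sample_batch()
    pred_vals = predictor(feats).squeeze(-1)  # predicted coefficients fed to g

    # (a) Surrogate step: fit M to the observed landscape f(g(pred_vals)).
    with torch.no_grad():
        target = objective_f(true_vals, solver_g(pred_vals))  # expensive call to g
    surr_in = torch.cat([pred_vals.detach(), true_vals], dim=-1)
    surr_loss = ((surrogate(surr_in).squeeze(-1) - target) ** 2).mean()
    opt_surr.zero_grad()  # also clears gradients that leaked into M in step (b)
    surr_loss.backward()
    opt_surr.step()

    # (b) Predictor step: ascend the smooth surrogate M instead of f(g(.)),
    # which yields dense gradients and avoids another call to g.
    pred_loss = -surrogate(torch.cat([pred_vals, true_vals], dim=-1)).squeeze(-1).mean()
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
```

In the paper the solver handles shortest path, multidimensional knapsack, and portfolio optimization rather than a toy top-k, and the surrogate is also conditioned on problem parameters so it can generalize to unseen instances; the sketch above only illustrates the alternating structure of fitting $M$ and then differentiating through it.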
Related papers
- Simulating, Fast and Slow: Learning Policies for Black-Box Optimization [12.925033436442925]
This paper introduces a novel method for solving classes of similar black-box optimization problems by learning an active learning policy.
After training the policy, downstream optimization of problems involving black-box simulators requires up to $\sim$90% fewer expensive simulator calls.
arXiv Detail & Related papers (2024-06-06T17:05:09Z) - Accelerated Gradient Algorithms with Adaptive Subspace Search for
Instance-Faster Optimization [6.896308995955336]
Gradient-based minimax optimal algorithms have promoted the development of continuous optimization and machine learning.
In this paper, we open up a new way to design and analyze gradient-based algorithms with direct applications in machine learning.
arXiv Detail & Related papers (2023-12-06T01:16:10Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Optimal Stochastic Non-smooth Non-convex Optimization through
Online-to-Non-convex Conversion [56.92236659731376]
We present new algorithms for optimizing non-convex, non-smooth objectives based on a novel analysis technique.
For deterministic second-order smooth objectives, applying advanced optimistic online learning techniques enables a new $O(\delta^{0.5})$ call complexity that recovers optimal or best-known results.
arXiv Detail & Related papers (2023-02-07T22:09:20Z) - Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z) - BiAdam: Fast Adaptive Bilevel Optimization Methods [104.96004056928474]
Bilevel optimization has attracted increased interest in machine learning due to its many applications.
We provide a useful analysis framework for both the constrained and unconstrained optimization.
arXiv Detail & Related papers (2021-06-21T20:16:40Z) - Learning to Optimize Under Constraints with Unsupervised Deep Neural
Networks [0.0]
We propose a machine learning (ML) method to learn how to solve a generic constrained continuous optimization problem.
In this paper, we propose an unsupervised deep learning (DL) solution for solving constrained optimization problems in real time.
arXiv Detail & Related papers (2021-01-04T02:58:37Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)