Learning for Robust Combinatorial Optimization: Algorithm and
Application
- URL: http://arxiv.org/abs/2112.10377v1
- Date: Mon, 20 Dec 2021 07:58:50 GMT
- Title: Learning for Robust Combinatorial Optimization: Algorithm and
Application
- Authors: Zhihui Shao and Jianyi Yang and Cong Shen and Shaolei Ren
- Abstract summary: Learning to optimize (L2O) has emerged as a promising approach to solving optimization problems by exploiting the strong prediction power of neural networks.
In this paper, we propose a novel learning-based optimizer, called LRCO, which quickly outputs a robust solution in the presence of uncertain context.
Our results highlight that LRCO can greatly reduce the worst-case cost while having a very low runtime complexity.
- Score: 26.990988571097827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to optimize (L2O) has recently emerged as a promising approach to
solving optimization problems by exploiting the strong prediction power of
neural networks and offering lower runtime complexity than conventional
solvers. While L2O has been applied to various problems, a crucial yet
challenging class of problems -- robust combinatorial optimization in the form
of minimax optimization -- has largely remained under-explored. In addition to
the exponentially large decision space, a key challenge for robust
combinatorial optimization lies in the inner optimization problem, which is
typically non-convex and entangled with outer optimization. In this paper, we
study robust combinatorial optimization and propose a novel learning-based
optimizer, called LRCO (Learning for Robust Combinatorial Optimization), which
quickly outputs a robust solution in the presence of uncertain context. LRCO
leverages a pair of learning-based optimizers -- one for the minimizer and the
other for the maximizer -- that use their respective objective functions as
losses and can be trained without the need for labels for training problem
instances. To evaluate the performance of LRCO, we perform simulations for the
task offloading problem in vehicular edge computing. Our results highlight that
LRCO can greatly reduce the worst-case cost and improve robustness, while
having a very low runtime complexity.
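The label-free min-max training described in the abstract can be pictured with a small sketch. The following is a minimal, hypothetical PyTorch-style illustration, not the paper's actual architecture: the names MinNet, MaxNet, and cost_fn, the layer sizes, the softmax relaxation of the discrete decision, and the alternating update scheme are all assumptions. Only the core idea follows the abstract: the maximizer is trained with the negated objective as its loss, and the minimizer is trained to reduce the objective under the maximizer's predicted worst case, with no labeled solutions required.

```python
import torch
import torch.nn as nn

CTX_DIM, N_CHOICES, UNC_DIM = 8, 10, 4   # context, decision, and uncertainty sizes (illustrative)

class MaxNet(nn.Module):
    """Predicts the worst-case uncertainty for a given context and candidate decision."""
    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(CTX_DIM + N_CHOICES, 64), nn.ReLU(),
                               nn.Linear(64, UNC_DIM), nn.Tanh())   # bounded uncertainty set [-1, 1]
    def forward(self, ctx, x):
        return self.f(torch.cat([ctx, x], dim=-1))

class MinNet(nn.Module):
    """Predicts a (softmax-relaxed) combinatorial decision from the context."""
    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(CTX_DIM, 64), nn.ReLU(),
                               nn.Linear(64, N_CHOICES))
    def forward(self, ctx):
        return torch.softmax(self.f(ctx), dim=-1)   # relaxation; argmax/round at inference

def cost_fn(ctx, x, e):
    # Placeholder differentiable cost standing in for the task-offloading objective.
    return ((x @ torch.ones(N_CHOICES, UNC_DIM)) * e).sum(-1) + ctx.mean(-1)

min_net, max_net = MinNet(), MaxNet()
opt_min = torch.optim.Adam(min_net.parameters(), lr=1e-3)
opt_max = torch.optim.Adam(max_net.parameters(), lr=1e-3)

for step in range(1000):
    ctx = torch.randn(32, CTX_DIM)                   # sampled problem instances, no labels
    # Maximizer update: its loss is the negated objective, pushing e toward the worst case.
    x = min_net(ctx).detach()
    loss_max = -cost_fn(ctx, x, max_net(ctx, x)).mean()
    opt_max.zero_grad(); loss_max.backward(); opt_max.step()
    # Minimizer update: minimize the objective under the maximizer's worst case,
    # treating the maximizer as fixed for this step (alternating optimization).
    x = min_net(ctx)
    loss_min = cost_fn(ctx, x, max_net(ctx, x).detach()).mean()
    opt_min.zero_grad(); loss_min.backward(); opt_min.step()
```

At inference, the softmax output would be rounded (e.g., argmax) to recover a discrete decision; the abstract does not specify the relaxation, so this is only one plausible choice.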
Related papers
- Self-Supervised Learning of Iterative Solvers for Constrained Optimization [0.0]
We propose a learning-based iterative solver for constrained optimization.
It can obtain very fast and accurate solutions by customizing the solver to a specific parametric optimization problem.
A novel loss function based on the Karush-Kuhn-Tucker conditions of optimality is introduced, enabling fully self-supervised training of both neural networks.
arXiv Detail & Related papers (2024-09-12T14:17:23Z)
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information [48.784330281177446]
Recent works in learning-integrated optimization have shown promise in settings where the optimization is only partially observed or where general-purpose optimizers perform poorly without expert tuning.
We propose using a smooth and learnable Landscape Surrogate as a replacement for $f \circ \mathbf{g}$.
This surrogate, learnable by neural networks, can be computed faster than the $\mathbf{g}$ solver, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization.
arXiv Detail & Related papers (2023-07-18T04:29:16Z)
- Approaching Globally Optimal Energy Efficiency in Interference Networks via Machine Learning [22.926877147296594]
This work presents a machine learning approach to optimize the energy efficiency (EE) in a multi-cell wireless network.
Results show that the method achieves an EE close to the optimum computed by branch-and-bound.
arXiv Detail & Related papers (2022-11-25T08:36:34Z)
- Learning Adaptive Evolutionary Computation for Solving Multi-Objective Optimization Problems [3.3266268089678257]
This paper proposes a framework that integrates MOEAs with adaptive parameter control using Deep Reinforcement Learning (DRL).
The DRL policy is trained to adaptively set the values that dictate the intensity and probability of mutation for solutions during optimization.
We show the learned policy is transferable, i.e., the policy trained on a simple benchmark problem can be directly applied to solve the complex warehouse optimization problem.
arXiv Detail & Related papers (2022-11-01T22:08:34Z)
- Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
- Learning to Optimize Under Constraints with Unsupervised Deep Neural Networks [0.0]
We propose a machine learning (ML) method to learn how to solve a generic constrained continuous optimization problem.
In this paper, we propose an unsupervised deep learning (DL) solution for solving constrained optimization problems in real-time.
arXiv Detail & Related papers (2021-01-04T02:58:37Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to better decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)