DeepCO: Offline Combinatorial Optimization Framework Utilizing Deep
Learning
- URL: http://arxiv.org/abs/2007.09881v1
- Date: Mon, 20 Jul 2020 04:17:30 GMT
- Title: DeepCO: Offline Combinatorial Optimization Framework Utilizing Deep
Learning
- Authors: Wenpeng Wei, Toshiko Aizono
- Abstract summary: We propose DeepCO, an offline optimization framework utilizing deep learning.
We also design an offline variation of the Travelling Salesman Problem (TSP) to model the warehouse operation sequence optimization problem.
With only limited historical data, the proposed distribution-regularized optimization method outperforms the existing baseline in offline experiments, reducing route length by 5.7% on average.
- Score: 1.2183405753834562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combinatorial optimization serves as an essential part of many modern
industrial applications. A great number of these problems must be solved in an
offline setting due to safety and/or cost issues, and simulation-based approaches
are difficult to realise for complicated systems. In this research, we propose
DeepCO, an offline combinatorial optimization framework utilizing deep
learning. We also design an offline variation of the Travelling Salesman Problem
(TSP) to model the warehouse operation sequence optimization problem for
evaluation. With only limited historical data, the proposed
distribution-regularized optimization method outperforms the existing baseline
in the offline TSP experiment, reducing route length by 5.7% on average, and
shows great potential for real-world problems.
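The paper does not ship code, so the following is only a minimal sketch of the general recipe the abstract describes: fit a surrogate cost model on logged routes, then search for a better route while penalizing departures from the historical data distribution. All names here (`support_penalty`, the edge-average surrogate, the toy instance) are illustrative stand-ins, not the authors' implementation:

```python
# Hypothetical sketch of distribution-regularized offline optimization
# for an offline TSP: search is steered by a surrogate cost model but
# penalized for leaving the region covered by historical routes.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # number of locations (toy size)
dist = rng.uniform(1, 10, size=(n, n))   # unknown true travel times
dist = (dist + dist.T) / 2

def true_cost(route):
    return sum(dist[route[i], route[(i + 1) % n]] for i in range(n))

# "Historical data": routes actually run in the past, with logged costs.
history = [rng.permutation(n) for _ in range(50)]
costs = np.array([true_cost(r) for r in history])

# Surrogate: average logged cost attributed to each directed edge
# (a cheap stand-in for the deep model trained on the same data).
edge_sum = np.zeros((n, n)); edge_cnt = np.zeros((n, n))
for r, c in zip(history, costs):
    for i in range(n):
        a, b = r[i], r[(i + 1) % n]
        edge_sum[a, b] += c / n; edge_cnt[a, b] += 1
edge_mean = np.divide(edge_sum, edge_cnt,
                      out=np.full((n, n), costs.mean() / n),
                      where=edge_cnt > 0)

def surrogate(route):
    return sum(edge_mean[route[i], route[(i + 1) % n]] for i in range(n))

def support_penalty(route):
    # Fraction of edges never observed in the data: a crude proxy for
    # divergence from the historical route distribution.
    return sum(edge_cnt[route[i], route[(i + 1) % n]] == 0 for i in range(n)) / n

lam = 5.0                                # regularization strength
best = min(history, key=true_cost).copy()
for _ in range(2000):                    # simple 2-swap local search
    cand = best.copy()
    i, j = rng.choice(n, size=2, replace=False)
    cand[i], cand[j] = cand[j], cand[i]
    if surrogate(cand) + lam * support_penalty(cand) < \
       surrogate(best) + lam * support_penalty(best):
        best = cand
print("offline search route cost:", true_cost(best))
```

With `lam = 0` the search freely exploits surrogate errors on edges never seen in the data; the penalty term plays the same qualitative role as the paper's distribution regularizer.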
Related papers
- Offline reinforcement learning for job-shop scheduling problems [1.3927943269211593]
This paper introduces a novel offline RL method designed for optimization problems with complex constraints.
Our approach encodes actions in edge attributes and balances expected rewards with the imitation of expert solutions.
We demonstrate the effectiveness of this method on job-shop scheduling and flexible job-shop scheduling benchmarks.
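How such a reward/imitation balance can look in code, as a hedged, generic sketch in the style of behavior-cloning-regularized offline RL; the paper's graph encoding of actions in edge attributes is omitted, and all tensors are random stand-ins:

```python
# Illustrative reward-vs-imitation trade-off: the policy maximizes a
# learned Q-value while staying close to expert actions from the
# offline dataset. q_net stands in for a critic trained elsewhere.
import torch, torch.nn as nn

state_dim, act_dim, alpha = 16, 4, 2.5
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
q_net = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# One batch of offline transitions (random stand-ins for logged schedules).
states = torch.randn(32, state_dim)
expert_actions = torch.randn(32, act_dim)

pred = policy(states)
q_val = q_net(torch.cat([states, pred], dim=-1)).mean()
bc_loss = ((pred - expert_actions) ** 2).mean()
loss = -q_val + alpha * bc_loss          # expected reward vs. imitation
opt.zero_grad(); loss.backward(); opt.step()
```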
arXiv Detail & Related papers (2024-10-21T07:33:42Z)
- Rethinking Optimal Transport in Offline Reinforcement Learning [64.56896902186126]
In offline reinforcement learning, the data is provided by various experts and some of them can be sub-optimal.
To extract an efficient policy, it is necessary to stitch the best behaviors from the dataset.
We present an algorithm that aims to find a policy that maps states to a partial distribution of the best expert actions for each given state.
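A toy version of the stitching idea, with the paper's optimal-transport machinery replaced by a simple return-weighted imitation loss (the weighting scheme here is illustrative only):

```python
# Rather than imitating every logged action equally, weight each one by
# how good it is (a softmax over estimated returns), approximating a
# partial distribution over the best expert actions per state.
import torch, torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(64, 8)              # logged states
actions = torch.randn(64, 2)             # logged expert actions (mixed quality)
returns = torch.randn(64)                # estimated return of each action

weights = torch.softmax(returns / 0.5, dim=0)   # emphasize the best behavior
loss = (weights * ((policy(states) - actions) ** 2).mean(dim=-1)).sum()
opt.zero_grad(); loss.backward(); opt.step()
```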
arXiv Detail & Related papers (2024-10-17T22:36:43Z)
- Offline Reinforcement Learning for Learning to Dispatch for Job Shop Scheduling [0.9831489366502301]
The Job Shop Scheduling Problem (JSSP) is a complex optimization problem.
Online Reinforcement Learning (RL) has shown promise by quickly finding acceptable solutions for JSSP.
We introduce Offline Reinforcement Learning for Learning to Dispatch (Offline-LD).
arXiv Detail & Related papers (2024-09-16T15:18:10Z)
- Offline Model-Based Optimization via Policy-Guided Gradient Search [30.87992788876113]
We introduce a new learning-to-search perspective for offline optimization by reformulating it as an offline reinforcement learning problem.
Our proposed policy-guided search approach explicitly learns the best policy for a given surrogate model created from the offline data.
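A hypothetical sketch of the two stages the summary describes: fit a surrogate on offline (design, score) pairs, then train a small policy by REINFORCE to propose search steps that are judged by that surrogate. Step sizes, the noise scale, and the missing conservatism terms are all placeholder choices, not the paper's method:

```python
import torch, torch.nn as nn

torch.manual_seed(0)
Xoff = torch.rand(64, 4)                            # offline designs
yoff = -(Xoff - 0.5).pow(2).sum(-1, keepdim=True)   # offline scores

surrogate = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
sopt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(500):                                # fit surrogate offline
    sopt.zero_grad()
    ((surrogate(Xoff) - yoff) ** 2).mean().backward()
    sopt.step()

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 4))
popt = torch.optim.Adam(policy.parameters(), lr=1e-3)
x = Xoff[yoff.argmax()].clone()                     # start from best design
for _ in range(50):                                 # policy-guided search
    mean = policy(x)
    step = mean + 0.05 * torch.randn(4)             # stochastic proposal
    reward = surrogate(x + 0.05 * torch.tanh(step)).sum().detach()
    logp = -((step.detach() - mean) ** 2).sum() / (2 * 0.05 ** 2)
    popt.zero_grad(); (-(reward * logp)).backward(); popt.step()
    with torch.no_grad():                           # take the proposed step
        x = x + 0.05 * torch.tanh(policy(x))
print("final surrogate score:", surrogate(x).item())
```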
arXiv Detail & Related papers (2024-05-08T18:27:37Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
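This equivalence is easy to see on a least-squares inner problem, where the exact Jacobian is available in closed form. The snippet below is an illustration of the general idea, not the paper's Folded Optimization system:

```python
# Backpropagating through an unrolled inner solver for
#   y*(x) = argmin_y 0.5 * ||A y - x||^2
# recovers the same gradient as the linear-system view, since
# y*(x) = (A^T A)^{-1} A^T x for this problem.
import torch

torch.manual_seed(0)
A = torch.randn(5, 3)
x = torch.randn(5, requires_grad=True)

y = torch.zeros(3)
lr = 0.9 / torch.linalg.matrix_norm(A, 2) ** 2   # safe step size
for _ in range(2000):                    # unrolled, differentiable updates
    y = y - lr * (A.T @ (A @ y - x))

y.sum().backward()                       # backprop through the unrolling

# Exact gradient of sum(y*) w.r.t. x: solve (A^T A) z = 1, then A z.
z = torch.linalg.solve(A.T @ A, torch.ones(3))
print("max deviation from exact:", (x.grad - A @ z).abs().max().item())
```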
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
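As a toy instance of a solution-sampler trained by policy gradient, consider Max-Cut with a product-of-Bernoullis sampler. The entropy bonus below is one concrete regularizer that keeps the distribution from collapsing early; the paper's actual construction may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, lr = 8, 0.1, 0.05
W = np.triu(rng.integers(0, 2, size=(n, n)), 1)   # random unweighted graph

def cut_value(s):                          # s is a 0/1 side assignment
    return sum(W[i, j] for i in range(n) for j in range(n) if s[i] != s[j])

theta = np.zeros(n)                        # Bernoulli logits of the sampler
for step in range(300):
    p = 1.0 / (1.0 + np.exp(-theta))
    grad = np.zeros(n)
    for _ in range(32):                    # REINFORCE over sampled cuts
        s = (rng.random(n) < p).astype(int)
        grad += cut_value(s) * (s - p)     # score-function estimator
    entropy_grad = -theta * p * (1 - p)    # gradient of entropy; keeps p near 1/2
    theta += lr * (grad / 32 + beta * entropy_grad)

best = (1.0 / (1.0 + np.exp(-theta)) > 0.5).astype(int)
print("cut value of the most likely assignment:", cut_value(best))
```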
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- PASTA: Pessimistic Assortment Optimization [25.51792135903357]
We consider a class of assortment optimization problems in an offline data-driven setting.
We propose an algorithm referred to as Pessimistic ASsortment opTimizAtion (PASTA) based on the principle of pessimism.
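The pessimism principle itself fits in a few lines: score each item by a lower confidence bound of its offline estimate, so that items that look good only because of sparse data are discounted. This toy ignores PASTA's demand model and exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, k = 10, 3                        # choose an assortment of size k
true_rev = rng.uniform(0, 1, n_items)
counts = rng.integers(2, 50, n_items)     # how often each item was offered

# Offline revenue estimates: noisier for rarely offered items.
est = true_rev + rng.normal(0, 1, n_items) / np.sqrt(counts)

lcb = est - 1.0 / np.sqrt(counts)         # pessimistic score
greedy = np.argsort(-est)[:k]             # naive: trust the raw estimates
pessim = np.argsort(-lcb)[:k]             # pessimistic assortment
print(f"true revenue, greedy:      {true_rev[greedy].sum():.3f}")
print(f"true revenue, pessimistic: {true_rev[pessim].sum():.3f}")
```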
arXiv Detail & Related papers (2023-02-08T01:11:51Z)
- Efficient Online Reinforcement Learning with Offline Data [78.92501185886569]
We show that we can simply apply existing off-policy methods to leverage offline data when learning online.
We extensively ablate these design choices, demonstrating the key factors that most affect performance.
We see that correct application of these simple recommendations can provide a $\mathbf{2.5\times}$ improvement over existing approaches.
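The core recipe from the abstract, sketched as a replay buffer that mixes offline and online transitions; the 50/50 split is one simple choice among the design decisions the paper ablates:

```python
# Keep the offline data in the replay buffer and draw each training
# batch half from offline and half from online experience.
import random

offline_buffer = [("offline", i) for i in range(1000)]   # logged transitions
online_buffer = []                                        # filled during training

def sample_batch(batch_size=256):
    half = batch_size // 2
    batch = random.sample(offline_buffer, half)
    if len(online_buffer) >= half:
        batch += random.sample(online_buffer, half)
    else:                                  # early on, fall back to offline data
        batch += random.sample(offline_buffer, half)
    return batch

online_buffer.extend(("online", i) for i in range(500))   # env interaction
print(len(sample_batch()), "transitions per update batch")
```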
arXiv Detail & Related papers (2023-02-06T17:30:22Z)
- DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems [41.57773395100222]
Deep reinforcement learning (DRL) models have shown promising results in solving NP-hard Combinatorial Optimization problems.
This paper addresses the scalability challenge in large-scale optimization by proposing a novel approach, namely, DIMES.
Unlike previous DRL methods which suffer from costly autoregressive decoding or iterative refinements of discrete solutions, DIMES introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions.
Extensive experiments show that DIMES outperforms recent DRL-based methods on large benchmark datasets for Traveling Salesman Problems and Maximal Independent Set problems.
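A stripped-down sketch of that continuous parameterization for TSP: an edge-logit matrix defines a distribution over tours and is trained directly by REINFORCE. DIMES' graph network and meta-learning components are omitted, so treat this as flavor, not the method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
pts = rng.random((n, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # pairwise distances

theta = np.zeros((n, n))                   # edge logits: the continuous space

def sample_tour(theta):
    """Sample a tour and return the gradient of its log-probability."""
    tour, remaining = [0], list(range(1, n))
    glogp = np.zeros_like(theta)
    while remaining:
        cur, cand = tour[-1], np.array(remaining)
        logits = theta[cur, cand]
        p = np.exp(logits - logits.max()); p /= p.sum()
        k = rng.choice(len(cand), p=p)
        glogp[cur, cand] -= p              # d log softmax / d logits
        glogp[cur, cand[k]] += 1.0
        tour.append(int(cand[k])); remaining.remove(int(cand[k]))
    return tour, glogp

def length(tour):
    return sum(D[tour[i], tour[(i + 1) % n]] for i in range(n))

for step in range(300):                    # REINFORCE with a mean baseline
    samples = [sample_tour(theta) for _ in range(16)]
    lens = np.array([length(t) for t, _ in samples])
    for (t, g), tour_len in zip(samples, lens):
        theta -= 0.1 * (tour_len - lens.mean()) * g   # favor shorter tours
print("best of 50 sampled tours:",
      min(length(sample_tour(theta)[0]) for _ in range(50)))
```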
arXiv Detail & Related papers (2022-10-08T23:24:37Z)
- JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps these limitations by querying additional data.
We show that it achieves a no-regret guarantee under conditions analogous to those for GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
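For flavor, a plain GP-UCB loop warm-started with offline observations, using scikit-learn's GP; JUMBO's specific neural-network-plus-GP decomposition is not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: -(x - 0.3) ** 2             # unknown objective on [0, 1]

X = rng.random((20, 1))                   # offline data from related tasks
y = f(X[:, 0]) + rng.normal(0, 0.05, 20)

gp = GaussianProcessRegressor(normalize_y=True)
grid = np.linspace(0, 1, 201)[:, None]
for _ in range(10):                       # online BO loop
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(mu + 2.0 * sd)]          # UCB acquisition
    X = np.vstack([X, [x_next]]); y = np.append(y, f(x_next[0]))
print(f"best query found: x = {X[np.argmax(y), 0]:.3f}")
```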
arXiv Detail & Related papers (2021-06-02T05:03:38Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Networks (DNNs) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with direct use of these methods, and outperformed all state-of-the-art algorithms tested in our experiments.
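Schematically, the loop alternates between refitting a DNN surrogate on all evaluated designs, optimizing the surrogate to propose the next design, and sending only that design to the expensive solver. A cheap analytic function stands in for FEM below, and all sizes are toy choices:

```python
import torch, torch.nn as nn

torch.manual_seed(0)
fem = lambda x: ((x - 0.7) ** 2).sum(-1, keepdim=True)   # stand-in for FEM

X = torch.rand(8, 2)                       # initial designs
Y = fem(X)
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

for it in range(15):                       # self-directed outer loop
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(300):                   # refit the DNN surrogate on all data
        loss = ((net(X) - Y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    x = X[Y.argmin()].clone().requires_grad_(True)       # search from incumbent
    xopt = torch.optim.Adam([x], lr=5e-2)
    for _ in range(100):                   # minimize the surrogate, not the FEM
        xopt.zero_grad(); net(x).sum().backward(); xopt.step()
    x_new = x.detach().clamp(0, 1)[None]
    X = torch.cat([X, x_new]); Y = torch.cat([Y, fem(x_new)])  # one FEM call
print("best design found:", X[Y.argmin()].tolist())
```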
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.