Matrix Encoding Networks for Neural Combinatorial Optimization
- URL: http://arxiv.org/abs/2106.11113v1
- Date: Mon, 21 Jun 2021 13:50:23 GMT
- Title: Matrix Encoding Networks for Neural Combinatorial Optimization
- Authors: Yeong-Dae Kwon, Jinho Choo, Iljoo Yoon, Minah Park, Duwon Park,
Youngjune Gwon
- Abstract summary: A popular approach is to use a neural net to compute on the
parameters of a given combinatorial optimization (CO) problem. Many practical
CO problems are specified as a matrix of parameters relating two groups of
items, but there is currently no neural net model that takes in such
matrix-style relationship data as an input. In this paper, we introduce Matrix
Encoding Network (MatNet) and show how conveniently it takes in and processes
parameters of such complex CO problems.
- Score: 5.524431376262751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) can help solve combinatorial optimization (CO) problems
better. A popular approach is to use a neural net to compute on the parameters
of a given CO problem and extract useful information that guides the search for
good solutions. Many CO problems of practical importance can be specified in a
matrix form of parameters quantifying the relationship between two groups of
items. There is currently no neural net model, however, that takes in such
matrix-style relationship data as an input. Consequently, these types of CO
problems have been out of reach for ML engineers. In this paper, we introduce
Matrix Encoding Network (MatNet) and show how conveniently it takes in and
processes parameters of such complex CO problems. Using an end-to-end model
based on MatNet, we solve asymmetric traveling salesman (ATSP) and flexible
flow shop (FFSP) problems as the first neural approach. In particular, for a
class of FFSP we have tested MatNet on, we demonstrate a far superior empirical
performance to any methods (neural or not) known to date.
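To make the matrix-input idea concrete, the sketch below (PyTorch; illustrative,
not the authors' implementation) shows a cross-attention encoder layer whose
attention scores are mixed with the raw matrix entries D, the role MatNet's
encoder plays for, e.g., an ATSP distance matrix or an FFSP processing-time
matrix relating jobs to machines. All names and sizes are assumptions.

```python
# Minimal sketch of a matrix-conditioned encoder layer: two item groups
# exchange cross-attention whose scores are mixed with the matrix D.
import torch
import torch.nn as nn

class MixedScoreCrossAttention(nn.Module):
    """Cross-attention from one item group to the other, with the
    problem matrix D mixed into the attention scores via a small MLP."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.mix = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
        self.dim = dim

    def forward(self, x_row, x_col, D):          # D: (n_row, n_col)
        scores = self.q(x_row) @ self.k(x_col).T / self.dim ** 0.5
        mixed = self.mix(torch.stack([scores, D], dim=-1)).squeeze(-1)
        return torch.softmax(mixed, dim=-1) @ self.v(x_col)

class MatrixEncoderLayer(nn.Module):
    """One layer: rows attend to columns using D, columns to rows using D.T."""
    def __init__(self, dim):
        super().__init__()
        self.row_att = MixedScoreCrossAttention(dim)
        self.col_att = MixedScoreCrossAttention(dim)

    def forward(self, x_row, x_col, D):
        return (x_row + self.row_att(x_row, x_col, D),
                x_col + self.col_att(x_col, x_row, D.T))

# Usage: encode a 5x7 relationship matrix between two groups of items.
dim = 32
layer = MatrixEncoderLayer(dim)
x_row, x_col = torch.randn(5, dim), torch.randn(7, dim)
x_row, x_col = layer(x_row, x_col, torch.rand(5, 7))
```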
Related papers
- Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks [103.78912399195005]
Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science and applied mathematics.
In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints.
We validate the effectiveness of this framework on representative CO problems including facility location, max-set covering, and the traveling salesman problem.
arXiv Detail & Related papers (2024-09-06T14:58:31Z) - Erasure Coded Neural Network Inference via Fisher Averaging [28.243239815823205]
Erasure-coded computing has been successfully used in cloud systems to reduce tail latency caused by factors such as straggling servers and heterogeneous traffic variations.
We design a method to code over neural networks: given two or more neural network models, we construct a coded model whose output is a linear combination of the outputs of the given networks.
We conduct experiments performing erasure coding over neural networks trained on real-world vision datasets and show that the accuracy of the decoded outputs using COIN is significantly higher than that of other baselines.
arXiv Detail & Related papers (2024-09-02T18:46:26Z) - Dynamically configured physics-informed neural network in topology
- Dynamically configured physics-informed neural network in topology optimization applications [4.403140515138818]
The physics-informed neural network (PINN) can avoid generating enormous amounts of data when solving forward problems.
A dynamically configured PINN-based topology optimization (DCPINN-TO) method is proposed.
The accuracy of the displacement prediction and optimization results indicate that the DCPINN-TO method is effective and efficient.
arXiv Detail & Related papers (2023-12-12T05:35:30Z) - Differentiable Visual Computing for Inverse Problems and Machine
- Differentiable Visual Computing for Inverse Problems and Machine Learning [27.45555082573493]
Visual computing methods are used to analyze geometry, physically simulate solids, fluids, and other media, and render the world via optical techniques.
Deep learning (DL) allows for the construction of general algorithmic models, side stepping the need for a purely first principles-based approach to problem solving.
DL is powered by highly parameterized neural network architectures -- universal function approximators -- and gradient-based search algorithms.
arXiv Detail & Related papers (2023-11-21T23:02:58Z) - Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z) - VN-Solver: Vision-based Neural Solver for Combinatorial Optimization
over Graphs [6.457205049532316]
We explore a vision-based method that is conceptually novel: can neural models solve graph optimization problems by *taking a look* at the graph pattern?
Our results suggest that the performance of such vision-based methods is not only non-trivial but also comparable to the state-of-the-art matrix-based methods, which opens a new avenue for developing data-driven optimization solvers.
arXiv Detail & Related papers (2023-08-06T18:33:11Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - A Differentiable Approach to Combinatorial Optimization using Dataless
- A Differentiable Approach to Combinatorial Optimization using Dataless Neural Networks [20.170140039052455]
We propose a radically different approach in that no data is required for training the neural networks that produce the solution.
In particular, we reduce the optimization problem to a neural network and employ a dataless training scheme to refine the parameters of the network such that those parameters yield the structure of interest.
arXiv Detail & Related papers (2022-03-15T19:21:31Z) - Learning from Images: Proactive Caching with Parallel Convolutional
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce 71.6% computation time with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable, and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)