Structure As Search: Unsupervised Permutation Learning for Combinatorial Optimization
- URL: http://arxiv.org/abs/2507.04164v3
- Date: Wed, 24 Sep 2025 16:36:27 GMT
- Title: Structure As Search: Unsupervised Permutation Learning for Combinatorial Optimization
- Authors: Yimeng Min, Carla P. Gomes
- Abstract summary: We propose a non-autoregressive framework for the Travelling Salesman Problem. By applying a similarity transformation to Hamiltonian cycles, the model learns to approximate permutation matrices via continuous relaxations.
- Score: 22.51828421737954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a non-autoregressive framework for the Travelling Salesman Problem where solutions emerge directly from learned permutations, without requiring explicit search. By applying a similarity transformation to Hamiltonian cycles, the model learns to approximate permutation matrices via continuous relaxations. Our unsupervised approach achieves competitive performance against classical heuristics, demonstrating that the inherent structure of the problem can effectively guide combinatorial optimization without sequential decision-making. Our method offers concrete evidence that neural networks can directly capture and exploit combinatorial structure.
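The "continuous relaxations" of permutation matrices mentioned in the abstract can be illustrated with Sinkhorn normalization, a standard relaxation technique; the sketch below is a generic illustration under that assumption, not necessarily the paper's exact parameterization.

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    """Relax a real-valued score matrix toward a doubly stochastic matrix
    by alternately normalizing rows and columns (Sinkhorn iterations).
    A doubly stochastic matrix is a continuous surrogate for a hard
    permutation matrix."""
    P = np.exp(logits - logits.max())      # exponentiate with stabilization
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # normalize rows
        P /= P.sum(axis=0, keepdims=True)  # normalize columns
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(5, 5)))
# Rows and columns of P each sum to ~1, approximating a permutation matrix.
```

As the scores sharpen (e.g. via a temperature), the doubly stochastic matrix concentrates toward a hard permutation, which is what lets gradient-based training stand in for discrete search.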
Related papers
- Graph Neural Networks are Heuristics [22.51828421737954]
We show that encoding global structure as an inductive bias enables a non-autoregressive model to generate solutions without search, supervision, or sequential decision-making. Our results establish that graph neural networks can internalize global structure and do not require explicit search to be effective.
arXiv Detail & Related papers (2026-01-19T23:40:08Z) - Geometric Algorithms for Neural Combinatorial Optimization with Constraints [46.17172939797694]
Self-Supervised Learning for Combinatorial Optimization (CO) is an emerging paradigm for solving problems using neural networks. We design an end-to-end differentiable framework that enables us to solve discrete constrained optimization problems with neural networks.
arXiv Detail & Related papers (2025-10-28T03:49:01Z) - Permutation Picture of Graph Combinatorial Optimization Problems [3.607883549126603]
This paper proposes a framework that formulates a wide range of graph optimization problems using permutation-based representations.
These problems include the travelling salesman problem, maximum independent set, maximum cut, and various other related problems.
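For the TSP case, the permutation-based representation can be made concrete: a tour is a permutation matrix, and the tour length is a simple bilinear expression in it. The conventions below (rows index tour positions, columns index cities) are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def tour_cost(P, D):
    """Length of the tour encoded by permutation matrix P under a
    pairwise distance matrix D. Row i of P one-hot-selects the city
    visited at position i; S pairs each position with its cyclic
    successor, so the sum collects D[city_i, city_{i+1}]."""
    n = len(P)
    S = np.roll(np.eye(n), 1, axis=1)   # S[i, (i+1) % n] = 1
    return np.sum((P @ D @ P.T) * S)

# Four cities at the corners of a unit square; the identity permutation
# visits them in order, so the tour is the square's perimeter.
pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
cost = tour_cost(np.eye(4), D)   # perimeter = 4.0
```

Because the objective is smooth in P, it extends directly to the relaxed (doubly stochastic) matrices that such frameworks optimize over.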
arXiv Detail & Related papers (2024-10-22T15:36:04Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
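The equivalence claimed above can be seen on a toy quadratic objective: hand-differentiating K unrolled gradient-descent steps produces a truncated Neumann series, which is precisely an iterative solve of a linear system. This is a hypothetical numpy sketch of that observation, not the paper's Folded Optimization system.

```python
import numpy as np

# Unrolled gradient descent on f(x; theta) = 0.5 x^T A x - theta^T x,
# whose minimizer is x* = A^{-1} theta. Starting from x_0 = 0, the
# Jacobian of the K-step iterate x_K w.r.t. theta is the truncated
# Neumann series  eta * sum_{i<K} (I - eta*A)^i  -- an iterative
# solution of the linear system A J = I.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
eta, K = 0.2, 200

J = np.zeros((2, 2))   # accumulated Jacobian d x_K / d theta
M = np.eye(2)          # current power (I - eta*A)^i
for _ in range(K):
    J += eta * M
    M = M @ (np.eye(2) - eta * A)

# For a stable step size and large K, J converges to A^{-1},
# the Jacobian of the exact solution map theta -> x*.
```

The backward pass of unrolling therefore inherits the convergence behavior of this implicit iterative linear solver, which is what motivates replacing it with a more efficient solve.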
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Simplifying Momentum-based Positive-definite Submanifold Optimization with Applications to Deep Learning [24.97120654216651]
We show how to solve difficult differential equations with momentum on a submanifold.
We do so by proposing a generalized version of the Riemannian normal coordinates.
We use our approach to simplify existing approaches for structured covariances and develop matrix-inverse-free $2^{\text{nd}}$-order optimizers for deep learning with low precision, using only matrix multiplications.
arXiv Detail & Related papers (2023-02-20T03:31:11Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
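The implicit-differentiation idea can be sketched on a linear fixed-point map: instead of backpropagating through every refinement iteration, differentiate the fixed-point condition itself. The map and parameters below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Fixed point x* of the contraction f(x, theta) = W @ x + theta
# (spectral radius of W < 1). Differentiating the fixed-point condition
# x* = W x* + theta gives  d x*/d theta = (I - W)^{-1},
# with no backprop through the refinement iterations themselves.
W = np.array([[0.3, 0.1],
              [0.0, 0.4]])
theta = np.array([1.0, -2.0])

x = np.zeros(2)
for _ in range(100):          # forward pass: iterate to convergence
    x = W @ x + theta

J_implicit = np.linalg.inv(np.eye(2) - W)   # implicit Jacobian
# Sanity check: the converged x satisfies x = J_implicit @ theta.
```

Training through `J_implicit` is what makes the memory and stability of the backward pass independent of how many refinement steps the forward pass took.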
arXiv Detail & Related papers (2022-07-02T10:00:35Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Branch & Learn for Recursively and Iteratively Solvable Problems in Predict+Optimize [9.772500430303285]
This paper proposes Branch & Learn, a framework for Predict+Optimize to tackle optimization problems containing parameters that are unknown at the time of solving.
Our framework applies also to iterative algorithms by viewing them as a degenerate form of recursion.
arXiv Detail & Related papers (2022-05-01T08:41:30Z) - Batch Bayesian Optimization on Permutations using Acquisition Weighted Kernels [86.11176756341114]
We introduce LAW, a new efficient batch acquisition method based on the determinantal point process.
We provide a regret analysis for our method to gain insight in its theoretical properties.
We evaluate the method on several standard problems involving permutations such as quadratic assignment.
arXiv Detail & Related papers (2021-02-26T10:15:57Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform discrete optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
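A minimal numpy illustration of the perturbed-optimizer idea for the simplest discrete solver, argmax: averaging the solver's output over random perturbations of its input yields a smooth, non-locally-constant map. The constants and Gaussian noise choice here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def perturbed_argmax(scores, sigma=0.5, n_samples=1000, seed=0):
    """Monte Carlo estimate of E[one_hot(argmax(scores + sigma * Z))].
    The hard argmax is piecewise constant in `scores`; its perturbed
    expectation varies smoothly, so gradients can flow through it."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(n_samples, len(scores)))
    idx = np.argmax(scores + sigma * Z, axis=1)
    return np.eye(len(scores))[idx].mean(axis=0)

p = perturbed_argmax(np.array([2.0, 1.0, 0.5]))
# p is a probability vector concentrated on index 0 but non-degenerate.
```

The temperature `sigma` trades off faithfulness to the hard solver against the smoothness of the resulting operation.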
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.