Structure As Search: Unsupervised Permutation Learning for Combinatorial Optimization
- URL: http://arxiv.org/abs/2507.04164v1
- Date: Sat, 05 Jul 2025 21:17:38 GMT
- Title: Structure As Search: Unsupervised Permutation Learning for Combinatorial Optimization
- Authors: Yimeng Min, Carla P. Gomes
- Abstract summary: We propose a non-autoregressive framework for the Travelling Salesman Problem. By applying a similarity transformation to Hamiltonian cycles, the model learns to approximate permutation matrices via continuous relaxations.
- Score: 30.652280460904375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a non-autoregressive framework for the Travelling Salesman Problem where solutions emerge directly from learned permutations without explicit search. By applying a similarity transformation to Hamiltonian cycles, the model learns to approximate permutation matrices via continuous relaxations. Our unsupervised approach achieves competitive performance against classical heuristics, demonstrating that the inherent structure of the problem can effectively guide combinatorial optimization without sequential decision-making.
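The paper's implementation is not reproduced here, but the core ingredient it names, continuous relaxations of permutation matrices, can be sketched. Below is a minimal, illustrative sketch (not the authors' method): Sinkhorn normalization relaxes a learned score matrix into a doubly stochastic matrix, and a differentiable tour length is minimized; `sinkhorn` and `soft_tour_length` are hypothetical names, and the similarity-transformation step from the abstract is not reproduced.

```python
import torch

def sinkhorn(log_alpha, n_iters=20):
    """Alternately normalize rows and columns in log space, relaxing
    a score matrix toward a doubly stochastic matrix."""
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)
    return log_alpha.exp()

def soft_tour_length(P, D):
    """Soft tour length: row i of P is a distribution over which city
    occupies position i, so (P @ D @ P.T)[i, j] is the expected distance
    between the cities at positions i and j; sum over the cycle edges."""
    n = P.shape[0]
    M = P @ D @ P.T
    idx = torch.arange(n)
    return M[idx, (idx + 1) % n].sum()

n = 10
coords = torch.rand(n, 2)
D = torch.cdist(coords, coords)                  # pairwise distances
logits = torch.randn(n, n, requires_grad=True)   # learned scores
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    loss = soft_tour_length(sinkhorn(logits), D)
    opt.zero_grad()
    loss.backward()
    opt.step()
# Round the relaxation to a hard permutation (e.g. Hungarian algorithm
# on -sinkhorn(logits)) to read off an actual tour.
```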
Related papers
- Permutation Picture of Graph Combinatorial Optimization Problems [3.607883549126603]
This paper proposes a framework that formulates a wide range of graph optimization problems using permutation-based representations.
These problems include the travelling salesman problem, maximum independent set, maximum cut, and various other related problems.
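To make the permutation-based representation concrete, here is a small, hedged example (mine, not taken from the paper): a tour is encoded as a permutation matrix P, and its length is recovered as trace(P D P^T S^T), where S is the cyclic shift matrix.

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
coords = rng.random((n, 2))
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

# A tour as a permutation: position i holds city tour[i].
tour = rng.permutation(n)
P = np.eye(n)[tour]                      # P[i, tour[i]] = 1

# Cyclic shift matrix with S[i, (i + 1) % n] = 1.
S = np.roll(np.eye(n), 1, axis=1)

# Tour length in the permutation picture: trace(P D P^T S^T).
length_matrix = np.trace(P @ D @ P.T @ S.T)
# Same quantity computed directly from the city sequence.
length_direct = sum(D[tour[i], tour[(i + 1) % n]] for i in range(n))
assert np.isclose(length_matrix, length_direct)
```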
arXiv Detail & Related papers (2024-10-22T15:36:04Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
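The claim that the backward pass is itself the solution of a linear system by an iterative method can be illustrated on a toy fixed-point solver. The sketch below, with assumed names and a quadratic example of my choosing, computes the adjoint by fixed-point (Neumann) iteration rather than by unrolling; it illustrates the general idea, not the Folded Optimization system itself.

```python
import torch

# Toy fixed-point map: one gradient-descent step on
# g(x) = 0.5 x^T A x - b^T x, whose fixed point solves A x = b.
lr = 0.1
A = torch.tensor([[3.0, 1.0], [1.0, 2.0]])
b = torch.tensor([1.0, -1.0])

def f(x):
    return x - lr * (A @ x - b)

# Forward pass: iterate to (near) convergence, no autograd graph needed.
x = torch.zeros(2)
for _ in range(300):
    x = f(x)

# Backward pass: given v = dL/dx*, the adjoint u solves the linear
# system (I - J)^T u = v, where J = df/dx.  Solving it by fixed-point
# iteration (a Neumann series) mirrors the observation that the
# backward pass is itself an iterative linear solve.
v = 2 * x                      # dL/dx* for L = ||x*||^2
J = torch.eye(2) - lr * A      # df/dx for this linear map
u = v.clone()
for _ in range(300):
    u = v + J.T @ u            # converges to (I - J)^{-T} v

grad_b = lr * u                # dL/db = (df/db)^T u, with df/db = lr * I

# Sanity check against the closed form x* = A^{-1} b.
x_star = torch.linalg.solve(A, b)
expected = 2 * torch.linalg.solve(A.T, x_star)
assert torch.allclose(grad_b, expected, atol=1e-4)
```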
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable components separately, linearizing only the smooth parts.
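The following toy sketch (mine, not the paper's algorithms) shows the linearization idea in a Frank-Wolfe style: the smooth quadratic f is replaced by its gradient at the current point, while the nonsmooth term h(x) = lam * ||x||_1 is kept exact in the subproblem, which here has a closed form over a box.

```python
import numpy as np

# Composite problem: min_{x in [-1, 1]^n} f(x) + h(x),
# with f smooth (quadratic) and h = lam * ||x||_1 nonsmooth.
rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n))
A = A.T @ A / n                       # positive semidefinite
b = rng.standard_normal(n)
lam = 0.1

def f_grad(x):
    return A @ x - b

def linearized_step(g, lam):
    """Minimize <g, s> + lam * ||s||_1 over the box [-1, 1]^n.
    Only the smooth part was linearized; h is kept exact.  The problem
    is separable: s_i = -sign(g_i) if |g_i| > lam, else 0."""
    return np.where(np.abs(g) > lam, -np.sign(g), 0.0)

x = np.zeros(n)
for k in range(500):
    s = linearized_step(f_grad(x), lam)
    gamma = 2.0 / (k + 2)             # standard Frank-Wolfe step size
    x = x + gamma * (s - x)
```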
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Simplifying Momentum-based Positive-definite Submanifold Optimization with Applications to Deep Learning [24.97120654216651]
We show how to solve difficult differential equations with momentum on a submanifold.
We do so by proposing a generalized version of the Riemannian normal coordinates.
We use our approach to simplify existing approaches for structured covariances and develop matrix-inverse-free $2^\text{nd}$-order optimizers for deep learning with low precision by using only matrix multiplications.
arXiv Detail & Related papers (2023-02-20T03:31:11Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
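For contrast with such analytical backward models, plain algorithm unrolling takes only a few lines of PyTorch: every solver iteration stays in the autograd graph and backpropagation walks back through all of them. A minimal sketch (toy quadratic solver, names of my choosing):

```python
import torch

A = torch.tensor([[3.0, 1.0], [1.0, 2.0]])
b = torch.tensor([1.0, -1.0], requires_grad=True)

x = torch.zeros(2)
for _ in range(100):                 # unrolled gradient-descent solver
    x = x - 0.1 * (A @ x - b)        # each step stays in the autograd graph

loss = (x ** 2).sum()
loss.backward()                      # backprop through all 100 steps
print(b.grad)                        # approximates d/db ||A^{-1} b||^2
```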
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
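One common way to make such training tractable, and a first-order approximation of the implicit differentiation the paper develops, is to iterate to the fixed point outside the autograd graph and then re-apply a single refinement step with the fixed point detached. A hedged sketch with an invented refinement map:

```python
import torch

def refine(z, x):
    """Illustrative refinement step pulling z toward a target that
    depends on the input x (a stand-in for e.g. a slot-update step)."""
    return 0.5 * z + 0.5 * torch.tanh(x)

x = torch.randn(4, requires_grad=True)

# Run the refinement to (near) convergence outside the graph.
z = torch.zeros(4)
with torch.no_grad():
    for _ in range(50):
        z = refine(z, x)

# One-step trick: re-apply the map once at the fixed point with its
# input detached, so gradients flow through a single application of
# the map instead of the whole iteration.
z = refine(z.detach(), x)
loss = z.sum()
loss.backward()      # cheap, stable (approximate) gradient w.r.t. x
```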
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Branch & Learn for Recursively and Iteratively Solvable Problems in Predict+Optimize [9.772500430303285]
This paper proposes Branch & Learn, a framework for Predict+Optimize to tackle optimization problems containing parameters that are unknown at the time of solving.
Our framework applies also to iterative algorithms by viewing them as a degenerate form of recursion.
arXiv Detail & Related papers (2022-05-01T08:41:30Z)
- Batch Bayesian Optimization on Permutations using Acquisition Weighted Kernels [86.11176756341114]
We introduce LAW, a new efficient batch acquisition method based on the determinantal point process.
We provide a regret analysis for our method to gain insight into its theoretical properties.
We evaluate the method on several standard problems involving permutations, such as the quadratic assignment problem.
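As a rough illustration of acquisition-weighted determinantal selection (my sketch, not the paper's LAW implementation, whose method and regret analysis involve more than this greedy loop): weight a similarity kernel by acquisition values and greedily grow the batch maximizing the weighted log-determinant, trading off acquisition value against diversity.

```python
import numpy as np

def greedy_law_batch(K, acq, batch_size):
    """Greedy batch selection from an acquisition-weighted L-ensemble
    L = diag(acq) @ K @ diag(acq).  Each step adds the candidate that
    most increases log det(L[S, S])."""
    L = K * np.outer(acq, acq)
    selected = []
    for _ in range(batch_size):
        best, best_gain = None, -np.inf
        for i in range(len(acq)):
            if i in selected:
                continue
            S = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(S, S)])
            if sign > 0 and logdet > best_gain:
                best, best_gain = i, logdet
        selected.append(best)
    return selected

# Toy usage: RBF kernel over scalar candidates, random acquisition values.
rng = np.random.default_rng(0)
xs = rng.random(30)
K = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / 0.1)
acq = rng.random(30) + 0.5
print(greedy_law_batch(K, acq, batch_size=4))
```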
arXiv Detail & Related papers (2021-02-26T10:15:57Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be readily used together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
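The perturbation idea can be sketched in a few lines: averaging the one-hot argmax under random perturbations of the scores yields a smooth surrogate that is differentiable in the scores. A minimal Monte Carlo sketch (illustrative parameter names):

```python
import numpy as np

def perturbed_argmax(theta, sigma=1.0, n_samples=1000, rng=None):
    """Smooth a discrete argmax by averaging its one-hot solutions under
    Gaussian perturbations: y(theta) = E[onehot(argmax(theta + sigma * Z))].
    The expectation is differentiable in theta and never locally constant."""
    rng = rng or np.random.default_rng(0)
    n = len(theta)
    Z = rng.standard_normal((n_samples, n))
    winners = np.argmax(theta + sigma * Z, axis=1)
    return np.bincount(winners, minlength=n) / n_samples

theta = np.array([1.0, 1.2, 0.5])
print(perturbed_argmax(theta))       # soft, smooth version of onehot(argmax)
# As sigma -> 0 this sharpens toward the hard argmax; gradient estimates
# w.r.t. theta can be built from the same Monte Carlo samples.
```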
arXiv Detail & Related papers (2020-02-20T11:11:32Z)