Edge-wise Topological Divergence Gaps: Guiding Search in Combinatorial Optimization
- URL: http://arxiv.org/abs/2512.15800v1
- Date: Tue, 16 Dec 2025 20:04:25 GMT
- Title: Edge-wise Topological Divergence Gaps: Guiding Search in Combinatorial Optimization
- Authors: Ilya Trofimov, Daria Voronkova, Alexander Mironenko, Anton Dmitriev, Eduard Tulchinskii, Evgeny Burnaev, Serguei Barannikov
- Abstract summary: We introduce a topological feedback mechanism for the Travelling Salesman Problem (TSP) by analyzing the divergence between a tour and the minimum spanning tree (MST)
- Score: 58.54587318370824
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce a topological feedback mechanism for the Travelling Salesman Problem (TSP) by analyzing the divergence between a tour and the minimum spanning tree (MST). Our key contribution is a canonical decomposition theorem that expresses the tour-MST gap as a sum of edge-wise topology-divergence gaps from the RTD-Lite barcode. Based on this, we develop topological guidance for 2-opt and 3-opt heuristics that improves their performance. We carry out experiments with fine-optimization of tours obtained from heatmap-based methods, TSPLIB, and random instances. Experiments demonstrate that the topology-guided optimization results in better performance and faster convergence in many cases.
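To make the idea concrete, here is a minimal sketch (not the authors' code) of computing the tour-MST gap and prioritizing 2-opt moves by a crude per-edge score. The helper names (`tour_mst_gap`, `edge_gap_scores`, `guided_2opt_step`) and the nearest-neighbor scoring rule are illustrative assumptions; the paper's edge-wise gaps come from the RTD-Lite barcode decomposition, which this sketch does not implement.

```python
# Minimal sketch (assumptions, not the authors' implementation): compute the
# tour-MST length gap and prioritize 2-opt moves by a crude per-edge score.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def tour_length(points, tour):
    """Length of the closed tour given as a list of point indices."""
    return sum(np.linalg.norm(points[tour[i]] - points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tour_mst_gap(points, tour):
    """Gap between the tour length and the MST weight of the same point set."""
    dist = cdist(points, points)
    return tour_length(points, tour) - minimum_spanning_tree(dist).sum()

def edge_gap_scores(points, tour):
    """Illustrative per-edge score: excess of each tour edge over the shortest
    edge incident to its endpoints (a stand-in for the RTD-Lite edge-wise gaps)."""
    dist = cdist(points, points)
    np.fill_diagonal(dist, np.inf)
    scores = []
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        scores.append(dist[a, b] - min(dist[a].min(), dist[b].min()))
    return np.array(scores)

def guided_2opt_step(points, tour):
    """One pass of 2-opt that examines tour edges in decreasing score order and
    returns the first improving reversal (or the original tour if none found)."""
    n, best = len(tour), tour_length(points, tour)
    for i in np.argsort(-edge_gap_scores(points, tour)):
        for j in range(n):
            a, b = min(i, j), max(i, j)
            if b - a < 2 or (a == 0 and b == n - 1):
                continue  # skip moves that would leave the tour unchanged
            cand = tour[:a + 1] + tour[a + 1:b + 1][::-1] + tour[b + 1:]
            if tour_length(points, cand) < best - 1e-12:
                return cand
    return tour

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((30, 2))
    tour = list(range(30))
    print("gap before:", tour_mst_gap(pts, tour))
    print("gap after: ", tour_mst_gap(pts, guided_2opt_step(pts, tour)))
```

The only design point carried over from the abstract is ordering candidate 2-opt moves by a per-edge divergence measure instead of scanning edges uniformly; the scoring function itself is a placeholder.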
Related papers
- GTS: Inference-Time Scaling of Latent Reasoning with a Learnable Gaussian Thought Sampler [54.10960908347221]
We model latent thought exploration as conditional sampling from learnable densities and instantiate this idea as a Gaussian Thought Sampler (GTS).
GTS predicts context-dependent perturbation distributions over continuous reasoning states and is trained with GRPO-style policy optimization while keeping the backbone frozen.
arXiv Detail & Related papers (2026-02-15T09:57:47Z) - Path Integral Optimiser: Global Optimisation via Neural Schrödinger-Föllmer Diffusion [1.8195082751200438]
We present an early investigation into the use of neural diffusion processes for global optimisation.
One can use the Boltzmann distribution to formulate optimisation as solving a Schrödinger bridge sampling problem.
We provide theoretical bounds for this optimiser, results on toy tasks, and a summary of the theory motivating the model.
arXiv Detail & Related papers (2025-06-07T14:46:18Z) - Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction [4.578425862931332]
This study focuses on the crucial variance-reduction mechanism used in both Euclidean and Riemannian settings.
Motivated by Euclidean methods, we introduce Riemannian loopless methods that replace the outer loop with computation triggered by a coin flip.
Using this loopless construction as a framework, we demonstrate its applicability to various important settings.
arXiv Detail & Related papers (2024-03-11T12:49:37Z) - Diffusion Stochastic Optimization for Min-Max Problems [33.73046548872663]
The optimistic gradient method is useful in addressing minimax optimization problems.
Motivated by the observation that the conventional stochastic version suffers from the need for a large batch size, we introduce and analyze a new same-sample optimistic gradient formulation (DSS-OG).
arXiv Detail & Related papers (2024-01-26T01:16:59Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - DAG Learning on the Permutahedron [33.523216907730216]
We propose a continuous optimization framework for discovering a latent directed acyclic graph (DAG) from observational data.
Our approach optimizes over the polytope of permutation vectors, the so-called Permutahedron, to learn a topological ordering.
arXiv Detail & Related papers (2023-01-27T18:22:25Z) - On a class of geodesically convex optimization problems solved via Euclidean MM methods [50.428784381385164]
We show how a difference of Euclidean convex functions structure can be exploited for different types of problems in statistics and machine learning.
Ultimately, we hope this work is useful to the broader community.
arXiv Detail & Related papers (2022-06-22T23:57:40Z) - First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
arXiv Detail & Related papers (2022-06-04T18:53:44Z) - Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent [15.136397170510834]
We prove new geodesic convexity results which provide stronger control of the iterates, yielding dimension-free convergence.
Our techniques also enable the analysis of two related notions of averaging, the entropically-regularized barycenter and the geometric median.
arXiv Detail & Related papers (2021-06-16T01:05:19Z) - Intermediate Layer Optimization for Inverse Problems using Deep Generative Models [86.29330440222199]
ILO is a novel optimization algorithm for solving inverse problems with deep generative models.
We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN-2 and PULSE for a wide range of inverse problems.
arXiv Detail & Related papers (2021-02-15T06:52:22Z)