Learning Lagrangian Multipliers for the Travelling Salesman Problem
- URL: http://arxiv.org/abs/2312.14836v1
- Date: Fri, 22 Dec 2023 17:09:34 GMT
- Title: Learning Lagrangian Multipliers for the Travelling Salesman Problem
- Authors: Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber,
Louis-Martin Rousseau
- Abstract summary: We propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure.
We apply this technique to the well-known Held-Karp Lagrangian relaxation for the travelling salesman problem.
In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate the proof of optimality.
- Score: 12.968608204035611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lagrangian relaxation is a versatile mathematical technique employed to relax
constraints in an optimization problem, enabling the generation of dual bounds
to prove the optimality of feasible solutions and the design of efficient
propagators in constraint programming (such as the weighted circuit
constraint). However, the conventional process of deriving Lagrangian
multipliers (e.g., using subgradient methods) is often computationally
intensive, limiting its practicality for large-scale or time-sensitive
problems. To address this challenge, we propose an innovative unsupervised
learning approach that harnesses the capabilities of graph neural networks to
exploit the problem structure, aiming to generate accurate Lagrangian
multipliers efficiently. We apply this technique to the well-known Held-Karp
Lagrangian relaxation for the travelling salesman problem. The core idea is to
predict accurate Lagrangian multipliers and to employ them as a warm start for
generating Held-Karp relaxation bounds. These bounds are subsequently utilized
to enhance the filtering process carried out by branch-and-bound algorithms. In
contrast to much of the existing literature, which primarily focuses on finding
feasible solutions, our approach operates on the dual side, demonstrating that
learning can also accelerate the proof of optimality. We conduct experiments
across various distributions of the metric travelling salesman problem,
considering instances with up to 200 cities. The results illustrate that our
approach can improve the filtering level of the weighted circuit global
constraint, reduce the optimality gap by a factor of two for instances left
unsolved at the timeout, and reduce the execution time for solved instances by 10%.
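The Held-Karp relaxation and the subgradient loop that the learned multipliers are meant to warm-start can be summarized compactly. The sketch below is an illustration under assumed data structures, not the authors' implementation: it computes the 1-tree dual bound w(pi) for a symmetric cost matrix and runs a few subgradient steps; in the paper's setting, pi0 would be the multipliers predicted by the graph neural network rather than a hand-picked starting point.

```python
import numpy as np
import networkx as nx

def one_tree_bound(dist, pi):
    """Held-Karp dual bound w(pi) and 1-tree node degrees for a symmetric TSP.

    dist : (n, n) symmetric cost matrix
    pi   : (n,) Lagrangian multipliers on the relaxed degree-2 constraints
    """
    n = len(dist)
    red = dist + pi[:, None] + pi[None, :]           # reduced costs c_ij + pi_i + pi_j
    g = nx.Graph()
    for i in range(1, n):                             # spanning tree over nodes 1..n-1
        for j in range(i + 1, n):
            g.add_edge(i, j, weight=float(red[i, j]))
    edges = list(nx.minimum_spanning_tree(g).edges)
    # complete the 1-tree with the two cheapest reduced-cost edges at node 0
    edges += [(0, j) for j in sorted(range(1, n), key=lambda j: red[0, j])[:2]]
    cost = sum(red[u, v] for u, v in edges)           # 1-tree length under reduced costs
    degree = np.zeros(n, dtype=int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return cost - 2.0 * pi.sum(), degree              # w(pi) = 1-tree cost - 2 * sum(pi)

def subgradient_ascent(dist, pi0, steps=50, step_size=1.0):
    """Plain subgradient ascent on w(pi); pi0 is the warm start (e.g. a GNN prediction)."""
    pi, best = pi0.astype(float), -np.inf
    for _ in range(steps):
        bound, degree = one_tree_bound(dist, pi)
        best = max(best, bound)
        pi = pi + step_size * (degree - 2)            # subgradient of w at pi
    return best, pi
```

With an accurate warm start, far fewer subgradient iterations are needed to reach a tight bound, which is the effect the paper exploits when feeding Held-Karp bounds to the weighted circuit propagator inside branch-and-bound.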
Related papers
- Learning Valid Dual Bounds in Constraint Programming: Boosted Lagrangian Decomposition with Self-Supervised Learning [2.8425837800129696]
Lagrangian decomposition (LD) is a relaxation method that provides a dual bound for constrained optimization problems.
This work presents the first generic method for learning valid dual bounds in constraint programming.
arXiv Detail & Related papers (2024-08-22T19:26:29Z) - Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy robustness, safety, and fairness requirements.
Our results show that rich parametrizations effectively mitigate the non-convexity of these finite-dimensional constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective is used as the loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method; a minimal sketch of this dual-objective-as-loss idea appears after this list.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Predicting Accurate Lagrangian Multipliers for Mixed Integer Linear Programs [2.6899003881035406]
We introduce a deep learning approach that bypasses the descent, effectively amortizing the local, per instance, optimization.
We show that our approach closes up to 85% of the gap between the continuous relaxation and the best Lagrangian bound, and provides a high quality warm-start for descent based Lagrangian methods.
arXiv Detail & Related papers (2023-10-23T07:53:47Z) - Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - A Stochastic Composite Augmented Lagrangian Method For Reinforcement
Learning [9.204659134755795]
We consider the linear programming (LP) formulation for deep reinforcement learning.
The augmented Lagrangian method suffers from the double-sampling obstacle in solving the LP.
A deep parameterized augmented Lagrangian method is proposed.
arXiv Detail & Related papers (2021-05-20T13:08:06Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
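Several of these works, like the main paper, learn on the dual side by treating the dual objective itself as the training signal. The following hedged sketch (an assumed setup, with a per-node MLP standing in for the paper's graph neural network; not released code) trains a predictor of Held-Karp multipliers without labels by maximizing the 1-tree dual bound. Because the bound's gradient with respect to pi coincides with the classical subgradient (node degree minus two), ordinary backpropagation through the selected edges suffices.

```python
import torch
import torch.nn as nn
import networkx as nx

def one_tree_edges(red):
    """Pick the minimum 1-tree edges for a (numpy) reduced-cost matrix."""
    n = red.shape[0]
    g = nx.Graph()
    for i in range(1, n):
        for j in range(i + 1, n):
            g.add_edge(i, j, weight=float(red[i, j]))
    edges = list(nx.minimum_spanning_tree(g).edges)
    edges += [(0, j) for j in sorted(range(1, n), key=lambda j: red[0, j])[:2]]
    return edges

# a per-node MLP over city coordinates stands in for the paper's GNN
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):                        # unsupervised training iterations
    coords = torch.rand(50, 2)              # random metric TSP instance
    dist = torch.cdist(coords, coords)
    pi = model(coords).squeeze(-1)          # predicted multipliers, one per city
    red = dist + pi[:, None] + pi[None, :]  # differentiable reduced costs
    edges = one_tree_edges(red.detach().numpy())
    # w(pi) = reduced-cost 1-tree length - 2 * sum(pi); its gradient w.r.t. pi
    # is the classical subgradient (degree of each node in the 1-tree minus 2)
    bound = sum(red[u, v] for u, v in edges) - 2.0 * pi.sum()
    loss = -bound                           # maximize the dual bound
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, the predicted multipliers can be handed to a subgradient or descent-based Lagrangian method as a warm start, which is how the main paper and the MILP-multiplier paper above use such predictions.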