Accelerating the Genetic Algorithm for Large-scale Traveling Salesman
Problems by Cooperative Coevolutionary Pointer Network with Reinforcement
Learning
- URL: http://arxiv.org/abs/2209.13077v1
- Date: Tue, 27 Sep 2022 00:06:04 GMT
- Title: Accelerating the Genetic Algorithm for Large-scale Traveling Salesman
Problems by Cooperative Coevolutionary Pointer Network with Reinforcement
Learning
- Authors: Rui Zhong and Enzhi Zhang and Masaharu Munetomo
- Abstract summary: We propose a two-stage optimization strategy for solving the Large-scale Traveling Salesman Problems (LSTSPs) named CCPNRL-GA.
First, we hypothesize that the participation of a well-performing individual as an elite can accelerate the convergence of optimization.
We validate the performance of our proposal on 10 LSTSPs and compare it with traditional EAs.
- Score: 2.7716102039510564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a two-stage optimization strategy for solving the
Large-scale Traveling Salesman Problems (LSTSPs) named CCPNRL-GA. First, we
hypothesize that the participation of a well-performing individual as an elite
can accelerate the convergence of optimization. Based on this hypothesis, in
the first stage, we cluster the cities and decompose the LSTSPs into multiple
subcomponents, and each subcomponent is optimized with a reusable Pointer
Network (PtrNet). After subcomponent optimization, we combine all sub-tours to
form a valid solution; this solution then enters the second stage of
optimization with the GA. We validate the performance of our proposal on 10 LSTSPs and compare
it with traditional EAs. Experimental results show that the participation of an
elite individual can greatly accelerate the optimization of LSTSPs, and our
proposal has broad prospects for dealing with LSTSPs.
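The two-stage pipeline in the abstract (cluster the cities, solve each subcomponent, concatenate the sub-tours into an elite, refine with a GA) can be sketched as follows. This is a minimal illustration, not the authors' implementation: plain k-means stands in for their clustering step, a nearest-neighbor heuristic stands in for the reusable Pointer Network, and the GA is a bare inversion-mutation loop seeded with the stage-one elite.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(cities, tour):
    # Total length of the closed tour over 2-D city coordinates.
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def kmeans(cities, k, iters=20):
    # Stage 1a: decompose the problem by clustering cities (stand-in
    # for the paper's clustering step). Returns lists of city indices.
    centers = random.sample(cities, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for idx, c in enumerate(cities):
            groups[min(range(k), key=lambda j: dist(c, centers[j]))].append(idx)
        centers = [(sum(cities[i][0] for i in g) / len(g),
                    sum(cities[i][1] for i in g) / len(g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return [g for g in groups if g]

def nearest_neighbor(cities, idxs):
    # Stage 1b: solve one subcomponent. Stand-in for the reusable
    # Pointer Network; any sub-tour solver fits this slot.
    unvisited = set(idxs)
    tour = [unvisited.pop()]
    while unvisited:
        nxt = min(unvisited, key=lambda i: dist(cities[tour[-1]], cities[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def ga_refine(cities, elite, pop_size=30, gens=200):
    # Stage 2: a GA whose initial population contains the elite tour,
    # testing the paper's hypothesis that an elite accelerates convergence.
    def mutate(t):
        t = t[:]
        i, j = sorted(random.sample(range(len(t)), 2))
        t[i:j + 1] = reversed(t[i:j + 1])  # 2-opt style inversion mutation
        return t
    pop = [elite] + [mutate(elite) for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=lambda t: tour_length(cities, t))
        pop = pop[:pop_size // 2]  # truncation selection keeps the best half
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return min(pop, key=lambda t: tour_length(cities, t))

def ccpnrl_ga_sketch(cities, k=4):
    # Concatenating the sub-tours yields one valid full tour (the elite).
    elite = [i for g in kmeans(cities, k) for i in nearest_neighbor(cities, g)]
    return ga_refine(cities, elite)
```

Because the clusters partition the city indices, the concatenated elite is always a valid permutation, which is what lets it join the GA population directly.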
Related papers
- CSF: Fixed-outline Floorplanning Based on the Conjugate Subgradient Algorithm Assisted by Q-Learning [8.771277942456972]
Experimental results demonstrate that the CSA assisted by Q-learning (CSAQ) can address both global floorplanning and legalization efficiently.
The two stages jointly contribute to competitive results on the optimization of wirelength.
arXiv Detail & Related papers (2025-04-04T04:01:26Z) - PSO and the Traveling Salesman Problem: An Intelligent Optimization Approach [0.0]
The Traveling Salesman Problem (TSP) is an optimization problem that aims to find the shortest possible route that visits each city exactly once and returns to the starting point.
This paper explores the application of Particle Swarm Optimization (PSO), a population-based optimization algorithm, to solve TSP.
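As a concrete illustration of applying PSO to the TSP, one common device is a random-key encoding: each particle is a real-valued vector, and sorting its keys yields a city permutation, so the standard continuous velocity and position updates apply unchanged. A minimal sketch (parameter values are illustrative, not taken from the paper):

```python
import math
import random

def tour_length(cities, tour):
    # Total length of the closed tour over 2-D city coordinates.
    return sum(math.hypot(cities[tour[i]][0] - cities[tour[(i + 1) % len(tour)]][0],
                          cities[tour[i]][1] - cities[tour[(i + 1) % len(tour)]][1])
               for i in range(len(tour)))

def pso_tsp(cities, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5):
    n = len(cities)
    # Random-key decoding: sorting a particle's keys gives a permutation.
    decode = lambda keys: sorted(range(n), key=keys.__getitem__)
    pos = [[random.random() for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                                # personal bests
    pbest_f = [tour_length(cities, decode(p)) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]                   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                # Classic PSO update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = tour_length(cities, decode(pos[i]))
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest), gbest_f
```

The random-key trick sidesteps the need for permutation-specific velocity operators, at the cost of a many-to-one mapping from key space to tours.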
arXiv Detail & Related papers (2025-01-25T20:21:31Z) - DualOpt: A Dual Divide-and-Optimize Algorithm for the Large-scale Traveling Salesman Problem [16.841700899374654]
The paper proposes a dual divide-and-optimize algorithm (DualOpt) for solving the large-scale traveling salesman problem (TSP).
DualOpt combines two complementary strategies to improve both solution quality and computational efficiency.
The proposed DualOpt achieves highly competitive results compared to 10 state-of-the-art algorithms in the literature.
arXiv Detail & Related papers (2025-01-15T04:16:28Z) - LLM as a Complementary Optimizer to Gradient Descent: A Case Study in Prompt Tuning [69.95292905263393]
We show that gradient-based optimizers and high-level LLMs are complementary to each other and can effectively collaborate in a combined optimization framework.
arXiv Detail & Related papers (2024-05-30T06:24:14Z) - Ant Colony Sampling with GFlowNets for Combinatorial Optimization [68.84985459701007]
Generative Flow Ant Colony Sampler (GFACS) is a novel meta-heuristic method that hierarchically combines amortized inference and parallel search.
Our method first leverages Generative Flow Networks (GFlowNets) to amortize a multi-modal prior distribution over a solution space.
arXiv Detail & Related papers (2024-03-11T16:26:06Z) - Controllable Prompt Tuning For Balancing Group Distributional Robustness [53.336515056479705]
We introduce an optimization scheme that achieves good performance across all groups without severely sacrificing performance on any of them.
We propose Controllable Prompt Tuning (CPT), which couples our approach with prompt-tuning techniques.
On spurious correlation benchmarks, our procedures achieve state-of-the-art results across both transformer and non-transformer architectures, as well as unimodal and multimodal data.
arXiv Detail & Related papers (2024-03-05T06:23:55Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Constrained Bayesian Optimization Under Partial Observations: Balanced
Improvements and Provable Convergence [6.461785985849886]
We endeavor to design an efficient and provable method for expensive POCOPs under the framework of constrained Bayesian optimization.
We present an improved design of the acquisition functions that introduces balanced exploration during optimization.
We propose a Gaussian process embedding different likelihoods as the surrogate model for a partially observable constraint.
arXiv Detail & Related papers (2023-12-06T01:00:07Z) - Bidirectional Looking with A Novel Double Exponential Moving Average to
Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (A Double exponential Moving averagE Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM.
arXiv Detail & Related papers (2023-07-02T18:16:06Z) - Accelerating the Evolutionary Algorithms by Gaussian Process Regression
with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method to estimate the elite individual to accelerate the convergence of optimization.
Our proposal has broad prospects for estimating the elite individual and thereby accelerating evolutionary optimization.
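The $\epsilon$-greedy acquisition function named in this title reduces, at its core, to a simple exploration/exploitation switch over candidate individuals. A minimal sketch, assuming a candidate pool and a surrogate score to minimize (the Gaussian process prediction in the paper); the function name is illustrative:

```python
import random

def epsilon_greedy_pick(candidates, surrogate_score, epsilon=0.1):
    """Pick the surrogate's best candidate, but explore at random with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(candidates)           # explore: ignore the surrogate
    return min(candidates, key=surrogate_score)    # exploit: trust the surrogate
```

With epsilon=0 this is pure surrogate exploitation; raising epsilon hedges against a misleading surrogate early in the run.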
arXiv Detail & Related papers (2022-10-13T07:56:47Z) - LinEasyBO: Scalable Bayesian Optimization Approach for Analog Circuit
Synthesis via One-Dimensional Subspaces [11.64233949999656]
We propose a fast and robust Bayesian optimization approach via one-dimensional subspaces for analog circuit synthesis.
Our proposed algorithm can accelerate the optimization procedure by up to 9x and 38x compared to LP-EI and REMBOpBO respectively when the batch size is 15.
arXiv Detail & Related papers (2021-09-01T21:25:25Z) - An Efficient Batch Constrained Bayesian Optimization Approach for Analog
Circuit Synthesis via Multi-objective Acquisition Ensemble [11.64233949999656]
We propose an efficient parallelizable Bayesian optimization algorithm via Multi-objective ACquisition function Ensemble (MACE).
Our proposed algorithm can reduce the overall simulation time by up to 74 times compared to differential evolution (DE) for the unconstrained optimization problem when the batch size is 15.
For the constrained optimization problem, our proposed algorithm can speed up the optimization process by up to 15 times compared to the weighted expected improvement based Bayesian optimization (WEIBO) approach, when the batch size is 15.
arXiv Detail & Related papers (2021-06-28T13:21:28Z) - Exploration in two-stage recommender systems [79.50534282841618]
Two-stage recommender systems are widely adopted in industry due to their scalability and maintainability.
A key challenge of this setup is that optimal performance of each stage in isolation does not imply optimal global performance.
We propose a method of synchronising the exploration strategies between the ranker and the nominators.
arXiv Detail & Related papers (2020-09-01T16:52:51Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Generalized Self-Adapting Particle Swarm Optimization algorithm with
archive of samples [0.0]
The paper introduces a new version of the algorithm, abbreviated as M-GAPSO.
In comparison with the original GAPSO formulation it includes the following four features: a global restart management scheme, samples gathering within an R-Tree based index, adaptation of a sampling behavior based on a global particle performance, and a specific approach to local search.
arXiv Detail & Related papers (2020-02-28T00:03:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.