Hybridization of evolutionary algorithm and deep reinforcement learning for multi-objective orienteering optimization
- URL: http://arxiv.org/abs/2206.10464v1
- Date: Tue, 21 Jun 2022 15:20:42 GMT
- Title: Hybridization of evolutionary algorithm and deep reinforcement learning for multi-objective orienteering optimization
- Authors: Wei Liu, Rui Wang, Tao Zhang, Kaiwen Li, Wenhua Li and Hisao Ishibuchi
- Abstract summary: Multi-objective orienteering problems (MO-OPs) are classical multi-objective routing problems.
This study seeks to solve MO-OPs through a problem-decomposition framework.
- Score: 16.23652137705642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-objective orienteering problems (MO-OPs) are classical multi-objective routing problems that have received considerable attention over the past decades. This study solves MO-OPs through a problem-decomposition framework: an MO-OP is decomposed into a multi-objective knapsack problem (MOKP) and a travelling salesman problem (TSP). The MOKP and the TSP are then solved by a multi-objective evolutionary algorithm (MOEA) and a deep reinforcement learning (DRL) method, respectively. The MOEA module selects cities, while the DRL module plans a Hamiltonian path over the selected cities. Iterating these two modules drives the population towards the Pareto front of the MO-OP. The proposed method is compared against NSGA-II and NSGA-III on various types of MO-OP instances. Experimental results show that it achieves the best performance on almost all test instances and exhibits strong generalization ability.
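To make the decomposition concrete, below is a minimal, self-contained Python sketch of the loop the abstract describes. It is an illustration rather than the authors' implementation: the DRL path planner is stood in for by a nearest-neighbour heuristic (plan_path), the MOEA is reduced to bit-flip mutation with Pareto-based survival, and the instance format (random city coordinates, one prize vector per objective, a path-length budget) is an assumption based on standard orienteering formulations.

```python
import math
import random

random.seed(0)

# Hypothetical MO-OP instance: city coordinates, two prize vectors, length budget.
N = 30
coords = [(random.random(), random.random()) for _ in range(N)]
prizes = [(random.random(), random.random()) for _ in range(N)]  # one prize per objective
BUDGET = 2.5  # maximum allowed path length

def dist(a, b):
    ax, ay = coords[a]
    bx, by = coords[b]
    return math.hypot(ax - bx, ay - by)

def plan_path(cities):
    """Stand-in for the DRL module: build a Hamiltonian path greedily
    (nearest neighbour). The paper trains a DRL model for this step."""
    if not cities:
        return [], 0.0
    path, rest, length = [cities[0]], set(cities[1:]), 0.0
    while rest:
        nxt = min(rest, key=lambda c: dist(path[-1], c))
        length += dist(path[-1], nxt)
        path.append(nxt)
        rest.remove(nxt)
    return path, length

def evaluate(selection):
    """MOKP side: prize totals of the selected cities, counted only if the
    planned path fits the length budget (otherwise the tour is infeasible)."""
    cities = [i for i in range(N) if selection[i]]
    _, length = plan_path(cities)
    if length > BUDGET:
        return (0.0, 0.0)
    return tuple(sum(prizes[c][k] for c in cities) for k in range(2))

def dominates(f, g):
    return all(x >= y for x, y in zip(f, g)) and any(x > y for x, y in zip(f, g))

# MOEA module: selects city subsets via bit-flip mutation and Pareto survival.
pop = [[random.random() < 0.3 for _ in range(N)] for _ in range(20)]
for gen in range(50):
    children = [[bit != (random.random() < 1.0 / N) for bit in ind] for ind in pop]
    merged = pop + children
    scored = [(ind, evaluate(ind)) for ind in merged]
    # Survivors: the non-dominated set, padded with dominated individuals if needed.
    front = [ind for ind, f in scored if not any(dominates(g, f) for _, g in scored)]
    pop = (front + [ind for ind, _ in scored if ind not in front])[:20]

print("objective vectors of final population:")
for ind in pop:
    print(evaluate(ind))
```

In the paper's method, a trained DRL model replaces the greedy planner, so the Hamiltonian paths (and hence the feasibility and prize totals of each city selection) improve beyond what a fixed heuristic can provide.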
Related papers
- Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging [111.8456671452411]
Multi-task learning (MTL) leverages a shared model to accomplish multiple tasks and facilitate knowledge transfer.
We propose a Weight-Ensembling Mixture of Experts (WEMoE) method for multi-task model merging.
We show that WEMoE and its efficient variant E-WEMoE outperform state-of-the-art (SOTA) model merging methods in terms of MTL performance, generalization, and robustness.
arXiv Detail & Related papers (2024-10-29T07:16:31Z)
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- In Search for Architectures and Loss Functions in Multi-Objective Reinforcement Learning [0.6650227510403052]
Multi-objective reinforcement learning (MORL) is essential for addressing the intricacies of real-world RL problems.
MORL is challenging due to unstable learning dynamics with deep learning-based function approximators.
Our work empirically explores model-free policy learning loss functions and the impact of different architectural choices.
arXiv Detail & Related papers (2024-07-23T19:17:47Z)
- Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II [52.083337333478674]
This paper proposes a weight-aware deep reinforcement learning (WADRL) approach designed to address the multiobjective vehicle routing problem with time windows (MOVRPTW).
The non-dominated sorting genetic algorithm II (NSGA-II) is then employed to optimize the solutions produced by WADRL.
arXiv Detail & Related papers (2024-07-18T02:46:06Z)
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
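The weight-ensembling idea in this entry can be illustrated with a tiny sketch. This is a simplified linear weight-interpolation stand-in rather than the paper's MoE routing; the parameter vectors and the preference coefficient alpha are hypothetical.

```python
# Hypothetical single-task "models": flat parameter vectors for tasks A and B.
theta_a = [0.9, 0.1, 0.5]   # parameters specialized for objective A
theta_b = [0.2, 0.8, 0.4]   # parameters specialized for objective B

def merge(alpha):
    """Blend the two specialized parameter sets with preference alpha in [0, 1].
    The paper uses MoE-based fusion over expert weights; this is a linear stand-in."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(theta_a, theta_b)]

# Sweeping the preference traces an approximate trade-off between the two tasks.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(alpha, merge(alpha))
```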
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours across multiple objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on the Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
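As a rough illustration of the UCB idea in this entry, the sketch below runs a generic UCB1 bandit over a discrete grid of candidate weight vectors. It is not the paper's algorithm: the grid, the horizon, and the rollout_return evaluator (a noisy synthetic utility standing in for evaluating a policy under a given weight vector) are all assumptions.

```python
import math
import random

random.seed(1)

# Hypothetical candidate weight vectors for a two-objective problem (entries sum to 1).
weights = [(i / 10, 1 - i / 10) for i in range(11)]

def rollout_return(w):
    """Stand-in for evaluating a policy under weight vector w: a noisy synthetic
    utility whose (unknown to the agent) peak sits near w = (0.7, 0.3)."""
    return -(w[0] - 0.7) ** 2 + random.gauss(0.0, 0.05)

counts = [0] * len(weights)
totals = [0.0] * len(weights)

for t in range(1, 201):
    if t <= len(weights):   # play every arm once first
        arm = t - 1
    else:                   # then pick the arm with the highest upper confidence bound
        arm = max(range(len(weights)),
                  key=lambda i: totals[i] / counts[i]
                  + math.sqrt(2.0 * math.log(t) / counts[i]))
    reward = rollout_return(weights[arm])
    counts[arm] += 1
    totals[arm] += reward

best = max(range(len(weights)), key=lambda i: totals[i] / counts[i])
print("most promising weight vector:", weights[best])
```

Each weight vector is treated as a bandit arm; the exploration bonus shrinks as an arm is sampled more often, so evaluations concentrate on the most promising preferences.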
- Multi-objective Pointer Network for Combinatorial Optimization [10.286195356515355]
Multi-objective combinatorial optimization problems (MOCOPs) arise in various real-world applications.
Deep reinforcement learning (DRL) methods have been proposed to generate approximately optimal solutions to these problems.
This study proposes a single-model deep reinforcement learning framework, called the multi-objective Pointer Network (MOPN).
arXiv Detail & Related papers (2022-04-25T14:02:34Z)
- Pareto Set Learning for Neural Multi-objective Combinatorial Optimization [6.091096843566857]
Multi-objective combinatorial optimization (MOCO) problems can be found in many real-world applications.
We develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without any further search procedure.
Our proposed method significantly outperforms other methods on the multi-objective travelling salesman problem, the multi-objective capacitated vehicle routing problem, and the multi-objective knapsack problem in terms of solution quality, speed, and model efficiency.
arXiv Detail & Related papers (2022-03-29T09:26:22Z)
- Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single-policy MORL, which learns an optimal policy given the preference over objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm, model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
arXiv Detail & Related papers (2020-11-19T22:35:31Z)
- Decomposition in Decision and Objective Space for Multi-Modal Multi-Objective Optimization [15.681236469530397]
Multi-modal multi-objective optimization problems (MMMOPs) have multiple solution subsets within the Pareto-optimal set.
Prevalent multi-objective evolutionary algorithms are not explicitly designed to search for multiple solution subsets, whereas algorithms designed for MMMOPs show degraded performance in the objective space.
This motivates the design of better algorithms for addressing MMMOPs.
arXiv Detail & Related papers (2020-06-04T03:18:47Z)
- Hybrid Adaptive Evolutionary Algorithm for Multi-objective Optimization [0.0]
This paper proposes a new multi-objective algorithm, MoHAEA, as an extension of the Hybrid Adaptive Evolutionary Algorithm (HAEA).
MoHAEA is compared with four state-of-the-art MOEAs, namely MOEA/D, pa$\lambda$-MOEA/D, MOEA/D-AWA, and NSGA-II.
arXiv Detail & Related papers (2020-04-29T02:16:49Z)