Multi-objective Pointer Network for Combinatorial Optimization
- URL: http://arxiv.org/abs/2204.11860v1
- Date: Mon, 25 Apr 2022 14:02:34 GMT
- Title: Multi-objective Pointer Network for Combinatorial Optimization
- Authors: Le-yang Gao and Rui Wang and Chuang Liu and Zhao-hong Jia
- Abstract summary: Multi-objective combinatorial optimization problems (MOCOPs) exist in various real-world applications.
Deep reinforcement learning (DRL) methods have been proposed to generate near-optimal solutions to combinatorial optimization problems.
This study proposes a single-model deep reinforcement learning framework, called multi-objective Pointer Network (MOPN).
- Score: 10.286195356515355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-objective combinatorial optimization problems (MOCOPs), one type of
complex optimization problems, widely exist in various real applications.
Although meta-heuristics have been successfully applied to address MOCOPs, their
calculation time is often prohibitively long. Recently, a number of deep reinforcement
learning (DRL) methods have been proposed to generate near-optimal solutions to
combinatorial optimization problems. However, the existing
studies on DRL have seldom focused on MOCOPs. This study proposes a
single-model deep reinforcement learning framework, called multi-objective
Pointer Network (MOPN), where the input structure of PN is effectively improved
so that the single PN is capable of solving MOCOPs. In addition, two training
strategies, based on a representative model and on transfer learning, respectively,
are proposed to further enhance the performance of MOPN in different
application scenarios. Moreover, compared to classical meta-heuristics, MOPN
requires only a forward propagation to obtain the Pareto front and thus consumes
far less computation time.
Meanwhile, MOPN is insensitive to problem scale, meaning that a trained MOPN is
able to address MOCOPs with different scales. To verify the performance of
MOPN, extensive experiments are conducted on three multi-objective traveling
salesman problems, in comparison with a state-of-the-art model, DRL-MOA, and
three classical multi-objective meta-heuristics. Experimental results
demonstrate that the proposed model outperforms all the comparative methods
with only 20% to 40% of the training time of DRL-MOA.
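The decomposition idea behind MOPN can be illustrated with a toy example. The sketch below is not the authors' code: it scalarizes a bi-objective TSP with a set of weight vectors, solves each subproblem by brute force (where MOPN would instead run a single forward pass of the trained pointer network), and keeps the non-dominated solutions as the approximate Pareto front.

```python
# Toy decomposition sketch (illustrative, not the authors' implementation).
import itertools
import math
import random

def tour_costs(tour, coords_a, coords_b):
    """The two objective values: tour length under each coordinate set."""
    def length(coords):
        return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))
    return (length(coords_a), length(coords_b))

def non_dominated(points):
    """Keep points that no other point weakly dominates (minimization)."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

random.seed(0)
n = 7  # small enough to enumerate every tour
coords_a = [(random.random(), random.random()) for _ in range(n)]
coords_b = [(random.random(), random.random()) for _ in range(n)]

solutions = []
for i in range(11):  # one scalarized subproblem per weight vector
    w = i / 10

    def scalarized(rest):
        c = tour_costs((0,) + rest, coords_a, coords_b)
        return w * c[0] + (1 - w) * c[1]

    best = min(itertools.permutations(range(1, n)), key=scalarized)
    solutions.append(tour_costs((0,) + best, coords_a, coords_b))

print(sorted(set(non_dominated(solutions))))  # approximate Pareto front
```

Sweeping more weight vectors densifies the front; MOPN amortizes the per-weight cost by conditioning a single trained network on the weight vector.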
Related papers
- Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging [111.8456671452411]
Multi-task learning (MTL) leverages a shared model to accomplish multiple tasks and facilitate knowledge transfer.
We propose a Weight-Ensembling Mixture of Experts (WEMoE) method for multi-task model merging.
We show that WEMoE and E-WEMoE outperform state-of-the-art (SOTA) model merging methods in terms of MTL performance, generalization, and robustness.
arXiv Detail & Related papers (2024-10-29T07:16:31Z)
- On-the-fly Modulation for Balanced Multimodal Learning [53.616094855778954]
Multimodal learning is expected to boost model performance by integrating information from different modalities.
The widely-used joint training strategy leads to imbalanced and under-optimized uni-modal representations.
We propose On-the-fly Prediction Modulation (OPM) and On-the-fly Gradient Modulation (OGM) strategies to modulate the optimization of each modality.
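As a rough illustration of gradient modulation (the scaling rule and names below are a simplification of mine, not the paper's exact OPM/OGM formulation), one can shrink the gradients of whichever modality's encoder is currently ahead so the weaker modality is not under-optimized:

```python
# Simplified gradient modulation (illustrative scaling rule, not the paper's).
import torch.nn as nn

def modulate_gradients(encoders: dict[str, nn.Module], scores: dict[str, float],
                       alpha: float = 1.0):
    """scores: running per-modality performance estimates (e.g. accuracy).
    Call between loss.backward() and optimizer.step()."""
    mean_score = sum(scores.values()) / len(scores)
    for name, encoder in encoders.items():
        if scores[name] > mean_score > 0:
            coef = (mean_score / scores[name]) ** alpha  # < 1 for the dominant modality
            for p in encoder.parameters():
                if p.grad is not None:
                    p.grad.mul_(coef)
```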
arXiv Detail & Related papers (2024-10-15T13:15:50Z)
- UNCO: Towards Unifying Neural Combinatorial Optimization through Large Language Model [21.232626415696267]
We propose a unified neural optimization framework to solve different types of combinatorial optimization problems (COPs) with a single model.
We use natural language to formulate text-attributed instances for different COPs and encode them in the same embedding space with a large language model (LLM).
Experiments show that the UNCO model can solve multiple COPs after a single-session training, and achieves satisfactory performance that is comparable to several traditional or learning-based baselines.
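A minimal sketch of that encoding step (the text templates and the embedding model are assumptions of mine for illustration, not UNCO's actual pipeline):

```python
# Two different COPs phrased as text and embedded in one shared space.
from sentence_transformers import SentenceTransformer

tsp = "TSP: visit cities at (0,0), (1,0), (1,1), (0,1) once; minimize tour length."
knapsack = "Knapsack: capacity 10; items (weight, value): (4,7), (3,5), (5,9)."

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
embeddings = encoder.encode([tsp, knapsack])
print(embeddings.shape)  # (2, 384): both instance types share one space
```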
arXiv Detail & Related papers (2024-08-22T08:42:44Z)
- In Search for Architectures and Loss Functions in Multi-Objective Reinforcement Learning [0.6650227510403052]
Multi-objective reinforcement learning (MORL) is essential for addressing the intricacies of real-world RL problems.
MORL is challenging due to unstable learning dynamics with deep learning-based function approximators.
Our work empirically explores model-free policy learning loss functions and the impact of different architectural choices.
arXiv Detail & Related papers (2024-07-23T19:17:47Z)
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
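In its simplest form, weight ensembling reduces to mixing expert parameters; the linear two-expert blend below is a simplification of mine, not the paper's routed MoE module:

```python
# Blend the parameters of two single-objective experts; sweeping the mixing
# weight yields candidate points along the trade-off curve (illustrative only).
import copy
import torch.nn as nn

def mix_experts(expert_a: nn.Module, expert_b: nn.Module, w: float) -> nn.Module:
    """Return a model whose parameters are w * A + (1 - w) * B."""
    mixed = copy.deepcopy(expert_a)
    sa, sb = expert_a.state_dict(), expert_b.state_dict()
    mixed.load_state_dict({k: w * sa[k] + (1 - w) * sb[k] for k in sa})
    return mixed

expert_a, expert_b = nn.Linear(4, 2), nn.Linear(4, 2)  # stand-in experts
candidates = [mix_experts(expert_a, expert_b, i / 4) for i in range(5)]
```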
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
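A bare-bones version of that search (the candidate grid, reward model, and plain UCB1 rule are illustrative choices of mine, not the paper's exact algorithm):

```python
# UCB1 over a discrete set of candidate utility weight vectors: the vector
# with the best optimistic value estimate receives the next evaluation.
import math
import random

weights = [(i / 4, 1 - i / 4) for i in range(5)]  # candidate weight vectors
counts, sums = [0] * len(weights), [0.0] * len(weights)

def evaluate(w):
    """Stand-in for training/evaluating a policy under utility w . reward."""
    return random.gauss(0.6 * w[0] + 0.4 * w[1], 0.1)

for t in range(1, 101):
    ucb = [float("inf") if counts[i] == 0
           else sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
           for i in range(len(weights))]
    arm = max(range(len(weights)), key=ucb.__getitem__)
    sums[arm] += evaluate(weights[arm])
    counts[arm] += 1

best = max(range(len(weights)), key=lambda i: sums[i] / counts[i])
print(weights[best])
```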
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Hybridization of evolutionary algorithm and deep reinforcement learning for multi-objective orienteering optimization [16.23652137705642]
Multi-objective orienteering problems (MO-OPs) are classical multi-objective routing problems.
This study seeks to solve MO-OPs through a problem-decomposition framework.
arXiv Detail & Related papers (2022-06-21T15:20:42Z)
- Pareto Set Learning for Neural Multi-objective Combinatorial Optimization [6.091096843566857]
Multi-objective combinatorial optimization (MOCO) problems can be found in many real-world applications.
We develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure.
Our proposed method significantly outperforms some other methods on the multiobjective traveling salesman problem, multiobjective vehicle routing problem and multiobjective knapsack problem in terms of solution quality, speed, and model efficiency.
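The core mechanism, representing the whole Pareto set with one model conditioned on a preference vector, can be sketched as follows (the plain MLP is an illustrative stand-in, not the paper's attention-based model):

```python
# One network maps (instance features, preference) to a decision; sweeping the
# preference at inference traces the learned Pareto set (illustrative only).
import torch
import torch.nn as nn

class PreferenceConditioned(nn.Module):
    def __init__(self, in_dim: int, n_obj: int, hidden: int = 64, out_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + n_obj, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, pref], dim=-1))

model = PreferenceConditioned(in_dim=16, n_obj=2)
x = torch.randn(1, 16)
for w in (0.0, 0.25, 0.5, 0.75, 1.0):  # one forward pass per preference
    out = model(x, torch.tensor([[w, 1 - w]]))
```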
arXiv Detail & Related papers (2022-03-29T09:26:22Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Multi-objective Asynchronous Successive Halving [10.632606255280649]
We propose algorithms that extend asynchronous successive halving (ASHA) to the multi-objective (MO) setting.
Our empirical analysis shows that MO ASHA makes it possible to perform multi-objective hyperparameter optimization (MO HPO) at scale.
Our algorithms establish new baselines for future research in the area.
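The multi-objective twist amounts to replacing top-k promotion by a scalar score with Pareto dominance; the promotion rule below is a reduction of mine, not the paper's exact algorithm:

```python
# Promote only the configurations on the non-dominated front of a rung
# (both objectives minimized), instead of the top-k by one score.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def promote(rung):
    """rung: list of (config, objectives); keep the Pareto-optimal entries."""
    return [(cfg, obj) for cfg, obj in rung
            if not any(dominates(other, obj) for _, other in rung)]

rung = [("cfg1", (0.12, 3.0)), ("cfg2", (0.10, 5.0)), ("cfg3", (0.15, 4.0))]
print(promote(rung))  # cfg3 is dominated by cfg1, so cfg1 and cfg2 survive
```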
arXiv Detail & Related papers (2021-06-23T19:39:31Z)
- Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single-policy MORL, which learns an optimal policy given a preference over objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
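For context, the enveloped update that EVI builds on can be written in tabular form (the sizes and preference grid below are illustrative choices of mine):

```python
# Tabular envelope update: the vector-valued target bootstraps from whichever
# next (action, preference) pair maximizes the scalarization under current w.
import numpy as np

n_states, n_actions, n_prefs, n_obj = 4, 2, 3, 2
Q = np.zeros((n_states, n_actions, n_prefs, n_obj))  # vector-valued Q-table
prefs = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
gamma, lr = 0.9, 0.1

def envelope_update(s, a, r, s_next, w_idx):
    w = prefs[w_idx]
    scal = Q[s_next] @ w                            # shape (n_actions, n_prefs)
    a_best, p_best = np.unravel_index(np.argmax(scal), scal.shape)
    target = r + gamma * Q[s_next, a_best, p_best]  # vector target
    Q[s, a, w_idx] += lr * (target - Q[s, a, w_idx])

envelope_update(s=0, a=1, r=np.array([1.0, 0.0]), s_next=2, w_idx=1)
```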
arXiv Detail & Related papers (2020-11-19T22:35:31Z)