Meta-Learning-based Deep Reinforcement Learning for Multiobjective
Optimization Problems
- URL: http://arxiv.org/abs/2105.02741v1
- Date: Thu, 6 May 2021 15:09:35 GMT
- Title: Meta-Learning-based Deep Reinforcement Learning for Multiobjective
Optimization Problems
- Authors: Zizhen Zhang, Zhiyuan Wu, Jiahai Wang
- Abstract summary: This paper proposes a concise meta-learning-based DRL approach.
It first trains a meta-model by meta-learning.
The meta-model is fine-tuned with a few update steps to derive submodels for the corresponding subproblems.
- Score: 11.478548460936837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (DRL) has recently shown its success in tackling
complex combinatorial optimization problems. When these problems are extended
to multiobjective ones, it becomes difficult for the existing DRL approaches to
flexibly and efficiently deal with multiple subproblems determined by weight
decomposition of objectives. This paper proposes a concise meta-learning-based
DRL approach. It first trains a meta-model by meta-learning. The meta-model is
fine-tuned with a few update steps to derive submodels for the corresponding
subproblems. The Pareto front is built accordingly. The computational
experiments on multiobjective traveling salesman problems demonstrate the
superiority of our method over most learning-based and iteration-based
approaches.
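A minimal sketch of this train-then-fine-tune loop, assuming a first-order (Reptile-style) meta-update and a toy scalarized loss in place of the authors' DRL policy and reward:

```python
import copy
import random
import torch

# Hypothetical stand-ins: the paper trains a DRL construction policy on
# scalarized (weight-decomposed) subproblems; here a tiny MLP and a toy
# scalarized loss play those roles.
def make_policy():
    return torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(),
                               torch.nn.Linear(32, 1))

def subproblem_loss(policy, w):
    # Placeholder for the negative scalarized reward w*f1 + (1-w)*f2 of a
    # weighted-sum subproblem; the two toy objectives conflict on purpose.
    out = policy(torch.randn(16, 4)).mean()
    return w * out.pow(2) + (1 - w) * (out - 1).pow(2)

def fine_tune(model, w, steps=5, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):          # "a few update steps"
        opt.zero_grad()
        subproblem_loss(model, w).backward()
        opt.step()
    return model

meta_policy, meta_lr = make_policy(), 0.1

# Meta-training: adapt to a randomly sampled subproblem, then move the
# meta-parameters toward the adapted parameters (first-order outer update).
for _ in range(100):
    adapted = fine_tune(copy.deepcopy(meta_policy), w=random.random())
    with torch.no_grad():
        for p_meta, p in zip(meta_policy.parameters(), adapted.parameters()):
            p_meta += meta_lr * (p - p_meta)

# Inference: derive one submodel per weight vector; their solutions are
# then filtered for Pareto optimality (not shown).
submodels = [fine_tune(copy.deepcopy(meta_policy), w=i / 10)
             for i in range(11)]
```

Only the short fine-tuning loop runs per weight vector at inference time, which is what keeps the decomposition cheap.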
Related papers
- Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [50.485788083202124]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
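The token-level soft Bellman backup at the core of such SAC-style training might look as follows (a sketch only; the shapes, the sparse reward, and reading LM logits directly as Q-values are illustrative assumptions, not DQO's exact losses):

```python
import torch
import torch.nn.functional as F

# Hypothetical batch of token-level transitions where the LM's logits are
# read as Q-values, Q(s, a) = logits[a].
alpha, gamma = 0.1, 1.0
vocab = 50_000
q_logits = torch.randn(8, vocab, requires_grad=True)   # Q(s_t, .)
next_q_logits = torch.randn(8, vocab)                  # Q(s_{t+1}, .), target net
actions = torch.randint(vocab, (8,))                   # chosen tokens a_t
rewards = torch.zeros(8)                               # sparse: final answer only

# Soft state value: V(s) = alpha * logsumexp(Q(s, .) / alpha)
next_v = alpha * torch.logsumexp(next_q_logits / alpha, dim=-1)
target = rewards + gamma * next_v

q_taken = q_logits.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
td_loss = F.mse_loss(q_taken, target.detach())
td_loss.backward()
```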
arXiv Detail & Related papers (2024-10-11T23:29:20Z) - Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
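A crude stand-in for the weight-space fusion idea, assuming a fixed preference vector in place of the paper's learned MoE routing:

```python
import torch

def fuse(expert_states, prefs):
    """Preference-weighted average of expert state_dicts (prefs sums to 1)."""
    fused = {}
    for key in expert_states[0]:
        fused[key] = sum(p * sd[key] for p, sd in zip(prefs, expert_states))
    return fused

net_a = torch.nn.Linear(4, 2)   # expert trained on objective A (placeholder)
net_b = torch.nn.Linear(4, 2)   # expert trained on objective B (placeholder)
target = torch.nn.Linear(4, 2)
target.load_state_dict(fuse([net_a.state_dict(), net_b.state_dict()],
                            torch.tensor([0.3, 0.7])))
```

Sweeping the preference vector over the simplex yields the family of fused models that approximates the Pareto set.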
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours that trade off multiple objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
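A minimal UCB1 loop over a discretized set of weight vectors (the discretization and the evaluation stub are assumptions for illustration):

```python
import math
import random

# Each arm is a candidate weight vector w; its "reward" is the (noisy)
# scalarized utility the agent achieves under w.
arms = [(i / 10, 1 - i / 10) for i in range(11)]
counts = [0] * len(arms)
values = [0.0] * len(arms)

def evaluate(w):
    # Placeholder for the agent's scalarized return under weights w.
    return random.gauss(w[0] * 0.4 + w[1] * 0.6, 0.1)

for t in range(1, 201):
    # Pick the arm with the highest upper confidence bound.
    ucb = [float("inf") if n == 0 else v + math.sqrt(2 * math.log(t) / n)
           for v, n in zip(values, counts)]
    i = max(range(len(arms)), key=ucb.__getitem__)
    r = evaluate(arms[i])
    counts[i] += 1
    values[i] += (r - values[i]) / counts[i]   # running mean update
```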
arXiv Detail & Related papers (2024-05-01T09:34:42Z) - Efficient Meta Neural Heuristic for Multi-Objective Combinatorial
Optimization [35.09656455088854]
We propose an efficient meta neural heuristic (EMNH) to solve multi-objective combinatorial optimization problems.
EMNH is able to outperform the state-of-the-art neural heuristics in terms of solution quality and learning efficiency.
arXiv Detail & Related papers (2023-10-22T08:59:02Z) - A Survey of Meta-Reinforcement Learning [69.76165430793571]
We cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL.
We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task.
We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
arXiv Detail & Related papers (2023-01-19T12:01:41Z) - DIMES: A Differentiable Meta Solver for Combinatorial Optimization
Problems [41.57773395100222]
Deep reinforcement learning (DRL) models have shown promising results in solving NP-hard combinatorial optimization problems.
This paper addresses the scalability challenge in large-scale optimization by proposing a novel approach, namely, DIMES.
Unlike previous DRL methods which suffer from costly autoregressive decoding or iterative refinements of discrete solutions, DIMES introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions.
Extensive experiments show that DIMES outperforms recent DRL-based methods on large benchmark datasets for Traveling Salesman Problems and Maximal Independent Set problems.
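The continuous-parameterization idea in miniature, assuming per-node logits for a toy maximal-independent-set instance and a plain REINFORCE gradient (DIMES' meta-learning and decoding details are omitted):

```python
import torch

edges = [(0, 1), (1, 2), (2, 3)]            # tiny 4-node graph
theta = torch.zeros(4, requires_grad=True)  # continuous solution distribution
opt = torch.optim.Adam([theta], lr=0.1)

def score(x):
    # Independent-set size, with a penalty for violated edges.
    penalty = sum(x[u] * x[v] for u, v in edges)
    return x.sum() - 2.0 * penalty

for _ in range(200):
    probs = torch.sigmoid(theta)
    x = torch.bernoulli(probs)                       # sample a candidate
    logp = (x * probs.log() + (1 - x) * (1 - probs).log()).sum()
    loss = -score(x).detach() * logp                 # REINFORCE estimator
    opt.zero_grad()
    loss.backward()
    opt.step()
```

No autoregressive decoding is needed: sampling and the gradient estimate both act on the compact continuous vector theta.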
arXiv Detail & Related papers (2022-10-08T23:24:37Z) - MODRL/D-EL: Multiobjective Deep Reinforcement Learning with Evolutionary
Learning for Multiobjective Optimization [10.614594804236893]
This paper proposes a multiobjective deep reinforcement learning algorithm with evolutionary learning for a typical complex problem, the multiobjective vehicle routing problem with time windows (MO-VRPTW).
The experimental results on MO-VRPTW instances demonstrate the superiority of the proposed algorithm over other learning-based and iteration-based approaches.
arXiv Detail & Related papers (2021-07-16T15:22:20Z) - Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
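Under the standard linearized-network (NTK) view, the analytical adaptation amounts to a kernel-ridge-regression solve; a sketch, not necessarily the paper's exact formulation:

```latex
% Closed-form adaptation under the linearized-network (NTK) assumption:
% with NTK k, support set (X, Y), base predictor f_0, and ridge parameter
% \lambda, one linear solve replaces MAML's iterative inner loop:
f_{\mathrm{adapted}}(x) = f_0(x)
    + k(x, X)\,\bigl(k(X, X) + \lambda I\bigr)^{-1}\bigl(Y - f_0(X)\bigr).
```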
arXiv Detail & Related papers (2021-02-07T20:53:23Z) - Provable Multi-Objective Reinforcement Learning with Generative Models [98.19879408649848]
We study the problem of single-policy MORL, which learns an optimal policy given the preference of objectives.
Existing methods require strong assumptions such as exact knowledge of the multi-objective decision process.
We propose a new algorithm called model-based envelope value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm.
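For reference, the enveloped multi-objective Q-learning backup that EVI generalizes can be sketched as follows (vector-valued Q with preference vector w; the model-based, generative-model extension is omitted):

```latex
(\mathcal{T}\mathbf{Q})(s, a, \mathbf{w})
    = \mathbf{r}(s, a) + \gamma\,\mathbb{E}_{s'}\!\left[\mathbf{Q}(s', a^{*}, \mathbf{w}^{*})\right],
\qquad
(a^{*}, \mathbf{w}^{*}) = \arg\max_{a', \mathbf{w}'} \mathbf{w}^{\top}\mathbf{Q}(s', a', \mathbf{w}'),
% i.e., the backup "envelopes" over both actions and preferences.
```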
arXiv Detail & Related papers (2020-11-19T22:35:31Z) - MODRL/D-AM: Multiobjective Deep Reinforcement Learning Algorithm Using
Decomposition and Attention Model for Multiobjective Optimization [15.235261981563523]
This paper proposes a multiobjective deep reinforcement learning method to solve multiobjective optimization problems.
In our method, each subproblem is solved by an attention model, which can exploit the structure features as well as node features of input nodes.
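The weighted-sum decomposition behind each subproblem can be sketched as follows (`tour_costs` is a hypothetical stand-in for the two objective values of a sampled tour, not the paper's attention model):

```python
import torch

def scalarized_reward(tour_costs: torch.Tensor, lam: float) -> torch.Tensor:
    """tour_costs: (batch, 2) objective values; returns (batch,) rewards
    for the subproblem with weight vector (lam, 1 - lam)."""
    weights = torch.tensor([lam, 1.0 - lam])
    return -(tour_costs * weights).sum(dim=-1)   # negate costs -> reward

rewards = scalarized_reward(torch.rand(32, 2), lam=0.3)
```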
arXiv Detail & Related papers (2020-02-13T12:59:39Z)