Noise Distribution Decomposition based Multi-Agent Distributional
Reinforcement Learning
- URL: http://arxiv.org/abs/2312.07025v1
- Date: Tue, 12 Dec 2023 07:24:15 GMT
- Title: Noise Distribution Decomposition based Multi-Agent Distributional
Reinforcement Learning
- Authors: Wei Geng, Baidi Xiao, Rongpeng Li, Ning Wei, Dong Wang, and Zhifeng
Zhao
- Abstract summary: Multi-Agent Reinforcement Learning (MARL) is more susceptible to noise due to the interference among intelligent agents.
We propose a novel decomposition-based multi-agent distributional RL method by approximating the globally shared noisy reward.
We also verify the effectiveness of the proposed method through extensive simulation experiments with noisy rewards.
- Score: 15.82785057592436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generally, a Reinforcement Learning (RL) agent updates its policy by
repeatedly interacting with the environment, contingent on the rewards received
for the observed states and undertaken actions. However, environmental
disturbances, commonly leading to noisy observations (e.g., rewards and states),
can significantly shape the agent's performance. Furthermore, the learning
performance of Multi-Agent Reinforcement Learning (MARL) is more susceptible to
noise due to the interference among intelligent agents. It therefore becomes
imperative to redesign MARL so as to mitigate the adverse impact of noisy
rewards. In this paper, we propose a novel decomposition-based multi-agent
distributional RL method that approximates the globally shared noisy reward
with a Gaussian mixture model (GMM) and decomposes it into a combination of
individual distributional local rewards, with which each agent can be updated
locally through distributional RL. Moreover, a diffusion model (DM) is
leveraged for reward generation in order to mitigate the costly interaction
expenditure of learning distributions. Furthermore, the optimality of the
distribution decomposition is theoretically validated, while the loss function
is carefully calibrated to avoid decomposition ambiguity. We also verify the
effectiveness of the proposed method through extensive simulation experiments
with noisy rewards. In addition, different risk-sensitive policies are
evaluated to demonstrate the superiority of distributional RL in different
MARL tasks.
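To make the decomposition idea in the abstract concrete, the following minimal Python sketch fits a GMM to simulated noisy global rewards and splits each Gaussian component into per-agent local distributions. The simulated reward data, the two-component mixture, and the equal-split rule (local means sum to the global component mean; local variances sum to the global component variance under an independence assumption) are illustrative assumptions, not the paper's learned decomposition or its DM-based reward generation.

    # Hypothetical sketch (not the paper's implementation): approximate the
    # globally shared noisy reward with a Gaussian mixture model and split each
    # component into per-agent local reward distributions.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    n_agents = 3

    # Simulated noisy global team reward: a two-valued clean reward plus Gaussian noise.
    clean = rng.choice([0.0, 1.0], size=2000, p=[0.7, 0.3])
    noisy_global = clean + rng.normal(0.0, 0.2, size=2000)

    # Step 1: approximate the noisy global reward distribution with a GMM.
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(noisy_global.reshape(-1, 1))

    # Step 2 (illustrative equal split): per-agent means sum back to the global
    # component mean, and per-agent variances sum to the global component
    # variance, assuming independent local rewards.
    local_means = gmm.means_.ravel() / n_agents
    local_vars = gmm.covariances_.ravel() / n_agents

    for k in range(gmm.n_components):
        print(f"component {k}: weight={gmm.weights_[k]:.2f}, "
              f"per-agent mean={local_means[k]:.3f}, per-agent var={local_vars[k]:.4f}")

In the paper's method the decomposition itself is learned and its optimality is established theoretically; this sketch only illustrates the consistency constraint that the local distributions must recombine into the fitted global mixture.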
Related papers
- Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning [30.64409258999151]
We show that action-conditioned return distributions collapse to their underlying policy's return distribution as the decision frequency increases.
We also introduce the superiority as a probabilistic generalization of the advantage.
Through simulations in an option-trading domain, we validate that proper modeling of the superiority distribution produces improved controllers at high decision frequencies.
arXiv Detail & Related papers (2024-10-14T19:18:38Z)
- ComaDICE: Offline Cooperative Multi-Agent Reinforcement Learning with Stationary Distribution Shift Regularization [11.620274237352026]
Offline reinforcement learning (RL) has garnered significant attention for its ability to learn effective policies from pre-collected datasets.
MARL presents additional challenges due to the large joint state-action space and the complexity of multi-agent behaviors.
We introduce a regularizer in the space of stationary distributions to better handle distributional shift.
arXiv Detail & Related papers (2024-10-02T18:56:10Z)
- MetaRM: Shifted Distributions Alignment via Meta-Learning [52.94381279744458]
Reinforcement Learning from Human Feedback (RLHF) in language model alignment is critically dependent on the capability of the reward model (RM).
We introduce MetaRM, a method leveraging meta-learning to align the RM with the shifted environment distribution.
Extensive experiments demonstrate that MetaRM significantly improves the RM's distinguishing ability in iterative RLHF optimization.
arXiv Detail & Related papers (2024-05-01T10:43:55Z)
- Discrete Probabilistic Inference as Control in Multi-path Environments [84.67055173040107]
We consider the problem of sampling from a discrete and structured distribution as a sequential decision problem.
We show that GFlowNets learn a policy that samples objects proportionally to their reward by enforcing a conservation of flows.
We also prove that some flow-matching objectives found in the GFlowNet literature are in fact equivalent to well-established MaxEnt RL algorithms with a corrected reward.
arXiv Detail & Related papers (2024-02-15T20:20:35Z)
- AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation [65.4532392602682]
One of the main challenges in offline Reinforcement Learning (RL) is the distribution shift that arises from the learned policy deviating from the data collection policy.
This is often addressed by avoiding out-of-distribution (OOD) actions during policy improvement as their presence can lead to substantial performance degradation.
We introduce AlberDICE, an offline MARL algorithm that performs centralized training of individual agents based on stationary distribution optimization.
arXiv Detail & Related papers (2023-11-03T18:56:48Z)
- Policy Evaluation in Distributional LQR [70.63903506291383]
We provide a closed-form expression of the distribution of the random return.
We show that this distribution can be approximated by a finite number of random variables.
Using the approximate return distribution, we propose a zeroth-order policy gradient algorithm for risk-averse LQR.
arXiv Detail & Related papers (2023-03-23T20:27:40Z)
- Distributional Reinforcement Learning for Multi-Dimensional Reward Functions [91.88969237680669]
We introduce Multi-Dimensional Distributional DQN (MD3QN) to model the joint return distribution from multiple reward sources.
As a by-product of joint distribution modeling, MD3QN can capture the randomness in returns for each source of reward.
In experiments, our method accurately models the joint return distribution in environments with richly correlated reward functions.
arXiv Detail & Related papers (2021-10-26T11:24:23Z)
- Global Distance-distributions Separation for Unsupervised Person Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail in correctly identifying the positive samples and negative samples through the distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-06-01T07:05:39Z)