Achieving Collective Welfare in Multi-Agent Reinforcement Learning via Suggestion Sharing
- URL: http://arxiv.org/abs/2412.12326v1
- Date: Mon, 16 Dec 2024 19:44:44 GMT
- Title: Achieving Collective Welfare in Multi-Agent Reinforcement Learning via Suggestion Sharing
- Authors: Yue Jin, Shuangqing Wei, Giovanni Montana
- Abstract summary: Conflict between self-interest and collective well-being often obstructs efforts to achieve shared welfare.
We propose using multi-agent reinforcement learning (MARL) to address this issue.
Traditional MARL solutions involve sharing rewards, values, and policies or designing intrinsic rewards to encourage agents to learn collectively optimal policies.
We introduce a novel MARL approach based on Suggestion Sharing (SS), where agents exchange only action suggestions.
- Score: 12.167248367980449
- Abstract: In human society, the conflict between self-interest and collective well-being often obstructs efforts to achieve shared welfare. Related concepts like the Tragedy of the Commons and Social Dilemmas frequently manifest in our daily lives. As artificial agents increasingly serve as autonomous proxies for humans, we propose using multi-agent reinforcement learning (MARL) to address this issue - learning policies to maximise collective returns even when individual agents' interests conflict with the collective one. Traditional MARL solutions involve sharing rewards, values, and policies or designing intrinsic rewards to encourage agents to learn collectively optimal policies. We introduce a novel MARL approach based on Suggestion Sharing (SS), where agents exchange only action suggestions. This method enables effective cooperation without the need to design intrinsic rewards, achieving strong performance while revealing less private information compared to sharing rewards, values, or policies. Our theoretical analysis establishes a bound on the discrepancy between collective and individual objectives, demonstrating how sharing suggestions can align agents' behaviours with the collective objective. Experimental results demonstrate that SS performs competitively with baselines that rely on value or policy sharing or intrinsic rewards.
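The abstract does not spell out the SS update rule, but the core mechanism, each agent broadcasting action suggestions while receivers nudge their own policies toward what peers suggest, can be sketched. The following is a minimal illustration under assumed names and hyperparameters; the toy environment, SUGGESTION_WEIGHT, and the tabular softmax policies are all assumptions, not the paper's algorithm.

```python
# Hedged sketch of Suggestion Sharing (SS): each agent learns from its
# own private reward, but also receives *action suggestions* from peers
# and nudges its policy toward them. The tabular softmax policies, the
# REINFORCE update, and SUGGESTION_WEIGHT are illustrative assumptions;
# the paper's exact objective is not given in the abstract.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_STATES, N_ACTIONS = 2, 4, 3
LR, SUGGESTION_WEIGHT = 0.1, 0.5  # assumed hyperparameters

# One tabular softmax policy per agent.
logits = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def policy(i, s):
    z = np.exp(logits[i][s] - logits[i][s].max())
    return z / z.sum()

def suggest(j, s):
    """Agent j's suggestion for what others should do in state s.
    Simplification: j suggests the action its own policy favours;
    in the paper, what to suggest is itself learned."""
    return int(np.argmax(policy(j, s)))

def private_reward(i, s, a):
    """Toy individual reward (stands in for a social-dilemma payoff)."""
    return float(N_ACTIONS - a) / N_ACTIONS

for episode in range(500):
    s = int(rng.integers(N_STATES))
    for i in range(N_AGENTS):
        probs = policy(i, s)
        a = int(rng.choice(N_ACTIONS, p=probs))
        r = private_reward(i, s, a)
        # REINFORCE step on the private reward:
        grad = -probs
        grad[a] += 1.0
        logits[i][s] += LR * r * grad
        # Pull toward each received suggestion (cross-entropy gradient):
        for j in range(N_AGENTS):
            if j != i:
                g = -probs
                g[suggest(j, s)] += 1.0
                logits[i][s] += LR * SUGGESTION_WEIGHT * g

print([np.round(policy(i, 0), 2) for i in range(N_AGENTS)])
```

Note that only the suggested action crosses the agent boundary here, which is the privacy point the abstract emphasises: peers never see one another's rewards, value estimates, or policy parameters.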
Related papers
- Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games [47.8980880888222]
Multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation.
We propose LASE (Learning to balance Altruism and Self-interest based on Empathy).
LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship (a toy version of this gifting rule is sketched after this entry).
arXiv Detail & Related papers (2024-10-10T12:30:56Z)
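A minimal reading of the gifting rule just described: each agent passes a fraction of its environmental reward to each co-player in proportion to a social-relationship weight. The redistribute helper and the fixed gift_weights matrix below are illustrative assumptions; LASE learns these weights from empathy-based inference.

```python
# Hedged sketch of reward gifting in the style of LASE: agent i keeps
# part of its reward and gifts the rest to co-players in proportion to
# "social relationship" weights. The weights here are fixed inputs;
# LASE infers and adapts them during learning (not shown).
import numpy as np

def redistribute(env_rewards, gift_weights):
    """env_rewards: shape (n,) raw rewards from the environment.
    gift_weights: shape (n, n); gift_weights[i, j] is the fraction of
    agent i's reward gifted to agent j (rows sum to <= 1, zero diagonal)."""
    env_rewards = np.asarray(env_rewards, dtype=float)
    kept = env_rewards * (1.0 - gift_weights.sum(axis=1))
    received = gift_weights.T @ env_rewards
    return kept + received

# Agent 0 gifts 30% of its reward to agent 1; agent 1 gifts nothing.
w = np.array([[0.0, 0.3],
              [0.0, 0.0]])
print(redistribute([10.0, 2.0], w))  # -> [7. 5.]
```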
- Beyond Joint Demonstrations: Personalized Expert Guidance for Efficient Multi-Agent Reinforcement Learning [54.40927310957792]
We introduce a novel concept of personalized expert demonstrations, tailored for each individual agent or, more broadly, each individual type of agent within a heterogeneous team.
These demonstrations solely pertain to single-agent behaviors and how each agent can achieve personal goals without encompassing any cooperative elements.
We propose an approach that selectively utilizes personalized expert demonstrations as guidance and allows agents to learn to cooperate.
arXiv Detail & Related papers (2024-03-13T20:11:20Z)
- Learning Roles with Emergent Social Value Orientations [49.16026283952117]
This paper introduces the "division of labor or roles" mechanism that is typical of human society.
We provide a promising solution for intertemporal social dilemmas (ISDs) with social value orientations (SVOs).
A novel learning framework, called Learning Roles with Emergent SVOs (RESVO), is proposed to transform the learning of roles into the emergence of social value orientations.
arXiv Detail & Related papers (2023-01-31T17:54:09Z)
- Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming [0.0]
We show that reformulating an agent's policy to be conditional on the policies of its teammates inherently maximizes a Mutual Information (MI) lower bound when optimizing under Policy Gradient (PG); a standard form of such a bound is given after this entry.
Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state-of-the-art in decentralized cooperative MARL tasks.
arXiv Detail & Related papers (2022-01-20T22:54:32Z)
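The abstract above does not state which MI lower bound InfoPG maximizes; a standard candidate in this setting is the Barber-Agakov variational bound, written here for two teammates' actions. This is a plausible instance for orientation, not the paper's exact derivation.

```latex
% Barber--Agakov variational lower bound on the mutual information
% between teammates' actions a_i and a_j; q is any variational
% approximation to the true conditional p(a_j | a_i).
\begin{align}
I(a_i; a_j) &= H(a_j) - H(a_j \mid a_i) \\
            &\ge H(a_j) + \mathbb{E}_{p(a_i,\,a_j)}\bigl[\log q(a_j \mid a_i)\bigr]
\end{align}
```

Equality holds when q matches the true conditional p(a_j | a_i); conditioning each agent's policy on its teammates' policies is one way to tighten this bound during policy-gradient training.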
- MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning [14.06682547001011]
State-of-the-art methods typically focus on learning a single reward model.
We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms.
Our approach is able to interactively tune a deep RL agent towards a variety of preferences, while eliminating the need for computing multiple policies.
arXiv Detail & Related papers (2021-12-30T19:21:03Z)
- Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z)
- Cooperative and Competitive Biases for Multi-Agent Reinforcement Learning [12.676356746752893]
Training a multi-agent reinforcement learning (MARL) algorithm is more challenging than training a single-agent reinforcement learning algorithm.
We propose an algorithm that boosts MARL training using the biased action information of other agents based on a friend-or-foe concept.
We empirically demonstrate that our algorithm outperforms existing algorithms in various mixed cooperative-competitive environments.
arXiv Detail & Related papers (2021-01-18T05:52:22Z)
- Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards [12.41853254173419]
We show that an agent's decision-making problem can be modeled as an interactive partially observable Markov decision process (I-POMDP).
We present an interactive advantage actor-critic method (IA2C+), which combines the independent advantage actor-critic network with a belief filter.
Empirical results show that IA2C+ learns the optimal policy faster and more robustly than several other baselines.
arXiv Detail & Related papers (2020-10-15T21:37:07Z)
- UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning [53.73686229912562]
We propose a novel MARL approach called Universal Value Exploration (UneVEn).
UneVEn learns a set of related tasks simultaneously with a linear decomposition of universal successor features (the standard successor-features decomposition is sketched after this entry).
Empirical results on a set of exploration games, challenging cooperative predator-prey tasks requiring significant coordination among agents, and StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where other state-of-the-art MARL methods fail.
arXiv Detail & Related papers (2020-10-06T19:08:47Z)
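The linear decomposition mentioned in the UneVEn entry is the standard successor-features factorisation: if one-step rewards are linear in features, r(s, a) = phi(s, a) . w, then Q(s, a) = psi(s, a) . w, where psi accumulates discounted expected features. A minimal tabular illustration follows; the feature map, dynamics, and task weights are toy assumptions, and UneVEn's universal, multi-agent version is considerably richer.

```python
# Standard successor-features decomposition (Barreto et al.): psi are
# the discounted expected features under a fixed policy pi; swapping
# the task weights w re-evaluates the same psi on a related task.
import numpy as np

GAMMA = 0.9
N_STATES, N_ACTIONS, N_FEATURES = 3, 2, 2

rng = np.random.default_rng(1)
phi = rng.random((N_STATES, N_ACTIONS, N_FEATURES))           # toy feature map
P = np.full((N_STATES, N_ACTIONS, N_STATES), 1.0 / N_STATES)  # toy dynamics
pi = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)          # uniform policy

# Fixed-point iteration: psi(s,a) = phi(s,a) + gamma * E[psi(s',a')].
psi = np.zeros_like(phi)
for _ in range(200):
    psi = phi + GAMMA * np.einsum("sap,pbf,pb->saf", P, psi, pi)

for w in (np.array([1.0, 0.0]), np.array([0.3, 0.7])):  # two related tasks
    Q = psi @ w  # instant re-evaluation; psi is not re-learned
    print("task w =", w, "-> Q(s0):", np.round(Q[0], 2))
```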
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function (a toy version of this interface is sketched after this entry).
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
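The learned incentive function in the last entry can be made concrete with a toy interface: alongside its environment action, each agent outputs a bounded reward it pays to each co-player, and that payment enters the recipients' learning signal. The network shape and the cost term below are assumptions for illustration; the paper's actual bi-level update, which trains the incentive function through its effect on recipients' learning, is more involved.

```python
# Hedged sketch of a learned incentive function: agent i maintains a
# small model mapping its observation to a bounded reward it gives
# each co-player; the giver pays a fraction of what it gifts.
import numpy as np

rng = np.random.default_rng(2)

class IncentiveFunction:
    def __init__(self, obs_dim, n_others, max_incentive=1.0, cost=0.1):
        self.W = rng.normal(scale=0.1, size=(n_others, obs_dim))
        self.max_incentive = max_incentive
        self.cost = cost  # assumed: giving incentives is itself costly

    def __call__(self, obs):
        """Bounded per-recipient incentives for this observation."""
        return self.max_incentive / (1.0 + np.exp(-(self.W @ obs)))

def shaped_rewards(env_rewards, incentive_fns, obs):
    """env_rewards: shape (n,). incentive_fns[i] maps obs -> gifts from i."""
    n = len(env_rewards)
    shaped = np.asarray(env_rewards, dtype=float).copy()
    for i, f in enumerate(incentive_fns):
        gifts = f(obs)  # one gift per co-player of agent i
        others = [j for j in range(n) if j != i]
        shaped[others] += gifts
        shaped[i] -= f.cost * gifts.sum()
    return shaped

fns = [IncentiveFunction(obs_dim=4, n_others=1) for _ in range(2)]
print(shaped_rewards([1.0, 0.0], fns, rng.random(4)))
```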
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.