SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2405.01839v1
- Date: Fri, 3 May 2024 04:12:19 GMT
- Title: SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning
- Authors: Qian Long, Fangwei Zhong, Mingdong Wu, Yizhou Wang, Song-Chun Zhu
- Abstract summary: We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
- Score: 58.84311336011451
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-agent systems (MAS) need to adaptively cope with dynamic environments, changing agent populations, and diverse tasks. However, most multi-agent systems struggle to do so because of the complexity of the state and task space. Social impact theory regards these complex influencing factors as forces acting on an agent, collectively referred to as the social force, which emanate from the environment, other agents, and the agent's intrinsic motivation. Inspired by this concept, we propose a novel gradient-based state representation for multi-agent reinforcement learning. To model the social forces non-trivially, we further introduce a data-driven method that employs denoising score matching to learn the social gradient fields (SocialGFs) from offline samples, e.g., the attractive or repulsive outcomes of each force. During interactions, agents act on the multi-dimensional gradients to maximize their own rewards. In practice, we integrate SocialGFs into widely used multi-agent reinforcement learning algorithms, e.g., MAPPO. The empirical results reveal that SocialGFs offer four advantages for multi-agent systems: 1) they can be learned without online interaction, 2) they transfer across diverse tasks, 3) they facilitate credit assignment in challenging reward settings, and 4) they scale with an increasing number of agents.
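As a rough illustration of the training recipe mentioned in the abstract, the sketch below fits a score network on offline outcome samples using the standard denoising score matching objective (Vincent, 2011), which the paper builds on. This is a minimal sketch under stated assumptions, not the authors' implementation: the `ScoreNet` architecture, noise scale `sigma`, and the placeholder offline data are all illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical score network: maps a (noised) entity-relative state to a gradient vector.
class ScoreNet(nn.Module):
    def __init__(self, state_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x):
        return self.net(x)

def dsm_loss(score_net, x, sigma=0.1):
    """Standard denoising score matching loss.

    x: batch of offline samples of desirable ("attractive") outcome states,
       e.g. agent positions relative to a target entity.
    For the Gaussian perturbation kernel q_sigma(x_tilde | x), the target score
    is (x - x_tilde) / sigma**2.
    """
    noise = torch.randn_like(x) * sigma
    x_tilde = x + noise
    target = (x - x_tilde) / sigma ** 2
    pred = score_net(x_tilde)
    return ((pred - target) ** 2).sum(dim=-1).mean()

# Usage sketch with placeholder data standing in for real offline samples.
state_dim = 2
net = ScoreNet(state_dim)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
offline_samples = torch.randn(1024, state_dim)
for _ in range(100):
    batch = offline_samples[torch.randint(0, len(offline_samples), (256,))]
    opt.zero_grad()
    loss = dsm_loss(net, batch)
    loss.backward()
    opt.step()
```

Per the abstract, one such gradient field can be learned per force (e.g., other agents, landmarks, obstacles), and the resulting multi-dimensional gradients serve as the agent's state representation for a downstream MARL algorithm such as MAPPO; how the fields are composed into observations is not specified here and is left as an assumption.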
Related papers
- OASIS: Open Agent Social Interaction Simulations with One Million Agents [147.2538500202457]
We propose a scalable social media simulator based on real-world social media platforms.
OASIS supports large-scale user simulations capable of modeling up to one million users.
We replicate various social phenomena, including information spreading, group polarization, and herd effects across X and Reddit platforms.
arXiv Detail & Related papers (2024-11-18T13:57:35Z) - Factorised Active Inference for Strategic Multi-Agent Interactions [1.9389881806157316]
Two complementary approaches can be integrated to model strategic multi-agent interactions.
The Active Inference framework (AIF) describes how agents employ a generative model to adapt their beliefs about and behaviour within their environment.
Game theory formalises strategic interactions between agents with potentially competing objectives.
We propose a factorisation of the generative model whereby each agent maintains explicit, individual-level beliefs about the internal states of other agents, and uses them for strategic planning in a joint context.
arXiv Detail & Related papers (2024-11-11T21:04:43Z) - Multi-Agents are Social Groups: Investigating Social Influence of Multiple Agents in Human-Agent Interactions [7.421573539569854]
We investigate whether a group of AI agents can create social pressure on users to agree with them.
We found that conversing with multiple agents increased the social pressure felt by participants.
Our study shows the potential advantages of multi-agent systems over single-agent platforms in causing opinion change.
arXiv Detail & Related papers (2024-11-07T10:00:46Z) - AdaSociety: An Adaptive Environment with Social Structures for Multi-Agent Decision-Making [45.179910497107606]
We introduce AdaSociety, a customizable multi-agent environment featuring expanding state and action spaces.
As agents progress, the environment adaptively generates new tasks with social structures for agents to undertake.
AdaSociety serves as a valuable research platform for exploring intelligence in diverse physical and social settings.
arXiv Detail & Related papers (2024-11-06T12:19:01Z) - Active Legibility in Multiagent Reinforcement Learning [3.7828554251478734]
The legibility-oriented framework allows agents to take legible actions that help other agents optimise their behaviors.
The experimental results demonstrate that the new framework is more efficient and requires less training time than several multiagent reinforcement learning algorithms.
arXiv Detail & Related papers (2024-10-28T12:15:49Z) - DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with that of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z) - Neural Amortized Inference for Nested Multi-agent Reasoning [54.39127942041582]
We propose a novel approach to bridge the gap between human-like inference capabilities and computational limitations.
We evaluate our method in two challenging multi-agent interaction domains.
arXiv Detail & Related papers (2023-08-21T22:40:36Z) - AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework, AgentVerse, that can collaboratively adjust its composition to act as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z) - Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches [0.0]
We present the most common multiagent problem representations and their main challenges.
We identify five research areas that address one or more of these challenges.
We suggest that, for multiagent reinforcement learning to be successful, future research should address these challenges with an interdisciplinary approach.
arXiv Detail & Related papers (2021-06-29T19:53:15Z) - Emergent Social Learning via Multi-agent Reinforcement Learning [91.57176641192771]
Social learning is a key component of human and animal intelligence.
This paper investigates whether independent reinforcement learning agents can learn to use social learning to improve their performance.
arXiv Detail & Related papers (2020-10-01T17:54:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.