Norm Enforcement with a Soft Touch: Faster Emergence, Happier Agents
- URL: http://arxiv.org/abs/2401.16461v3
- Date: Tue, 5 Mar 2024 10:58:33 GMT
- Title: Norm Enforcement with a Soft Touch: Faster Emergence, Happier Agents
- Authors: Sz-Ting Tzeng, Nirav Ajmeri, Munindar P. Singh
- Abstract summary: A multiagent system is a society of autonomous agents whose interactions can be regulated via social norms.
We think of these reactions by an agent to the satisfactory or unsatisfactory behaviors of another agent as communications from the first agent to the second agent.
We develop Nest, a framework that models social intelligence via a wider variety of communications and understanding of them than in previous work.
- Score: 15.315985512420568
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A multiagent system is a society of autonomous agents whose interactions can
be regulated via social norms. In general, the norms of a society are not
hardcoded but emerge from the agents' interactions. Specifically, how the
agents in a society react to each other's behavior and respond to the reactions
of others determines which norms emerge in the society. We think of these
reactions by an agent to the satisfactory or unsatisfactory behaviors of
another agent as communications from the first agent to the second agent.
Understanding these communications is a kind of social intelligence: these
communications provide natural drivers for norm emergence by pushing agents
toward certain behaviors, which can become established as norms. Whereas it is
well-known that sanctioning can lead to the emergence of norms, we posit that a
broader kind of social intelligence can prove more effective in promoting
cooperation in a multiagent system.
Accordingly, we develop Nest, a framework that models social intelligence via
a wider variety of communications and understanding of them than in previous
work. To evaluate Nest, we develop a simulated pandemic environment and conduct
simulation experiments to compare Nest with baselines considering a combination
of three kinds of social communication: sanction, tell, and hint.
We find that societies formed of Nest agents achieve norms faster. Moreover,
Nest agents effectively avoid undesirable consequences, namely negative
sanctions and deviation from their goals, and yield higher satisfaction for
themselves than baseline agents, despite requiring only an equivalent amount of
information.
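
To make the communication distinction concrete, the sketch below simulates a toy society in which observers react to each other's behavior with one of the three communication kinds named in the abstract (sanction, tell, hint), and the observed agent updates a simple value estimate from those reactions. The agent class, the masking behaviors, the learning-rate update, and the signal strengths are illustrative assumptions made for this note; they are not the authors' Nest implementation or their pandemic environment.

```python
# Hypothetical sketch of norm emergence driven by three kinds of social
# communication (sanction, tell, hint). The agent design, update rule, and
# payoff values are assumptions, not the Nest framework itself.
import random
from collections import defaultdict

ACTIONS = ["mask", "no_mask"]  # toy pandemic-style behaviors (assumed)

class Agent:
    def __init__(self, name, lr=0.2, epsilon=0.1):
        self.name = name
        self.value = defaultdict(float)  # estimated desirability of each action
        self.lr = lr
        self.epsilon = epsilon

    def act(self):
        # epsilon-greedy choice over the two toy behaviors
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value[a])

    def receive(self, kind, action):
        # Interpret another agent's communication about 'action'. Sanctions
        # carry the strongest signal; tells and hints carry progressively
        # weaker (but gentler) signals -- an assumption of this sketch.
        signal = {"sanction": -1.0, "tell": -0.5, "hint": -0.2}[kind]
        if action == "mask":  # observers approve of masking in this toy setup
            signal = -signal
        self.value[action] += self.lr * (signal - self.value[action])

def step(agents, kind):
    actions = {agent: agent.act() for agent in agents}
    for actor, action in actions.items():
        for observer in agents:
            if observer is not actor:
                # the observer reacts; the actor receives the communication
                actor.receive(kind, action)
    return sum(a == "mask" for a in actions.values()) / len(agents)

if __name__ == "__main__":
    random.seed(0)
    agents = [Agent(f"a{i}") for i in range(20)]
    for t in range(50):
        compliance = step(agents, kind="hint")
    print(f"final share of agents following the toy norm: {compliance:.2f}")
```

In this toy setup even the weak hint signal gradually pushes the population toward the masking behavior, while sanctions only accelerate the same drift; this is meant purely to illustrate why softer communications could, in principle, carry equivalent guidance, not to reproduce the paper's experimental results.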
Related papers
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
arXiv Detail & Related papers (2024-05-03T04:12:19Z) - Agent Alignment in Evolving Social Norms [65.45423591744434]
We propose an evolutionary framework for agent evolution and alignment, named EvolutionaryAgent.
In an environment where social norms continuously evolve, agents better adapted to the current social norms will have a higher probability of survival and proliferation.
We show that EvolutionaryAgent can align progressively better with the evolving social norms while maintaining its proficiency in general tasks.
arXiv Detail & Related papers (2024-01-09T15:44:44Z) - Mediated Multi-Agent Reinforcement Learning [3.8581550679584473]
We show how a mediator can be trained alongside agents with policy gradient to maximize social welfare.
Our experiments in matrix and iterative games highlight the potential power of applying mediators in Multi-Agent Reinforcement Learning.
arXiv Detail & Related papers (2023-06-14T10:31:37Z) - Value Engineering for Autonomous Agents [3.6130723421895947]
Previous approaches have treated values as labels associated with some actions or states of the world, rather than as integral components of agent reasoning.
We propose a new AMA paradigm grounded in moral and social psychology, where values are instilled into agents as context-dependent goals.
We argue that this type of normative reasoning, where agents are endowed with an understanding of norms' moral implications, leads to value-awareness in autonomous agents.
arXiv Detail & Related papers (2023-02-17T08:52:15Z) - Aligning to Social Norms and Values in Interactive Narratives [89.82264844526333]
We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games.
We introduce the GALAD agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values.
arXiv Detail & Related papers (2022-05-04T09:54:33Z) - Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z) - Noe: Norms Emergence and Robustness Based on Emotions in Multiagent Systems [0.0]
This paper investigates how modeling emotions affect the emergence and robustness of social norms via social simulation experiments.
We find that an ability in agents to consider emotional responses to the outcomes of norm satisfaction and violation promotes norm compliance.
arXiv Detail & Related papers (2021-04-30T14:42:22Z) - Prosocial Norm Emergence in Multiagent Systems [14.431260905391138]
We consider a setting where not only the member agents are adaptive but also the multiagent system itself is adaptive.
We focus on prosocial norms, which help achieve positive outcomes for society and often provide guidance to agents to act in a manner that takes into account the welfare of others.
arXiv Detail & Related papers (2020-12-29T02:59:55Z) - Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z) - Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)