A mechanism of Individualistic Indirect Reciprocity with internal and
external dynamics
- URL: http://arxiv.org/abs/2105.14144v1
- Date: Fri, 28 May 2021 23:28:50 GMT
- Title: A mechanism of Individualistic Indirect Reciprocity with internal and
external dynamics
- Authors: Mario Ignacio González Silva, Ricardo Armando González Silva,
Héctor Alfonso Juárez López, and Antonio Aguilera Ontiveros
- Abstract summary: This research proposes a new variant of the Nowak and Sigmund model, focused on agents' attitudes.
Using an agent-based model and a data-science method, we show through simulation results that the discriminatory stance of the agents prevails in most cases.
The results also show that when the reputation of others is unknown, high obstinacy combined with a high cooperation demand yields a heterogeneous society.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The cooperation mechanism of indirect reciprocity has been studied
by making multiple variations of its parts. This research proposes a new
variant of the Nowak and Sigmund model, focused on agents' attitudes, called
Individualistic Indirect Reciprocity. In our model, an agent reinforces its
strategy to the extent that it makes a profit. We also include conditions
related to the environment, the visibility of agents, the cooperation demand,
and an agent's attitude toward maintaining its cooperation strategy. Using an
agent-based model and a data-science method, we show through simulation
results that the discriminatory stance of the agents prevails in most cases.
In general, cooperators appear only under conditions of low reputation
visibility and a high degree of cooperation demand. The results also show
that when the reputation of others is unknown, high obstinacy combined with a
high cooperation demand produces a heterogeneous society. The simulations
show a wide diversity of scenarios: centralized, polarized, and mixed
societies.
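To make the mechanism concrete, below is a minimal agent-based sketch in the
spirit of the abstract: image-scoring donors and recipients, a visibility
parameter controlling whether reputations are seen, a cooperation-demand
threshold, and an obstinacy parameter governing whether unprofitable agents
revise their strategy. The parameter names, the [-5, 5] image-score range,
and the payoff-driven threshold update are illustrative assumptions, not the
authors' exact specification.

```python
# Minimal sketch of an indirect-reciprocity simulation in the spirit of the
# Nowak-Sigmund image-scoring model, with an "individualistic" payoff-driven
# strategy update. All parameters below are illustrative assumptions.
import random

N_AGENTS = 100            # population size
ROUNDS = 10_000           # donor-recipient encounters
BENEFIT, COST = 1.0, 0.5  # payoff of receiving help / cost of giving it
VISIBILITY = 0.5          # probability the donor sees the recipient's reputation
DEMAND = 0                # extra image score demanded before cooperating
OBSTINACY = 0.9           # probability an unprofitable agent keeps its strategy

class Agent:
    def __init__(self):
        self.image = 0                          # public reputation (image score)
        self.threshold = random.randint(-5, 5)  # cooperate if image >= threshold
        self.payoff = 0.0

agents = [Agent() for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    donor, recipient = random.sample(agents, 2)
    # With probability VISIBILITY the donor knows the recipient's image;
    # otherwise it falls back to a neutral reputation of 0.
    seen = recipient.image if random.random() < VISIBILITY else 0
    if seen >= donor.threshold + DEMAND:
        donor.payoff -= COST
        recipient.payoff += BENEFIT
        donor.image = min(donor.image + 1, 5)   # helping raises reputation
    else:
        donor.image = max(donor.image - 1, -5)  # refusing lowers it
    # Individualistic reinforcement: an agent that is not profiting revises
    # its strategy unless it is obstinate enough to keep it.
    if donor.payoff < 0 and random.random() > OBSTINACY:
        donor.threshold = max(-5, min(5, donor.threshold + random.choice((-1, 1))))

cooperators = sum(a.threshold == -5 for a in agents)  # help everyone
defectors = sum(a.threshold == 5 for a in agents)     # help almost no one
discriminators = N_AGENTS - cooperators - defectors   # conditional helpers
print(f"cooperators={cooperators} discriminators={discriminators} defectors={defectors}")
```

Sweeping VISIBILITY, DEMAND, and OBSTINACY over a grid is one way to explore
the kind of centralized, polarized, and mixed outcomes the abstract describes.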
Related papers
- Can Agents Spontaneously Form a Society? Introducing a Novel Architecture for Generative Multi-Agents to Elicit Social Emergence [0.11249583407496219]
We introduce a generative agent architecture called ITCMA-S, which includes a basic framework for individual agents and a framework that supports social interactions among multiple agents.
This architecture enables agents to identify and filter out behaviors that are detrimental to social interactions, guiding them to choose more favorable actions.
arXiv Detail & Related papers (2024-09-10T13:39:29Z)
- Enhancing Heterogeneous Multi-Agent Cooperation in Decentralized MARL via GNN-driven Intrinsic Rewards [1.179778723980276]
Multi-agent Reinforcement Learning (MARL) is emerging as a key framework for sequential decision-making and control tasks.
The deployment of these systems in real-world scenarios often requires decentralized training, a diverse set of agents, and learning from infrequent environmental reward signals.
We propose the CoHet algorithm, which utilizes a novel Graph Neural Network (GNN)-based intrinsic motivation to facilitate the learning of heterogeneous agent policies.
arXiv Detail & Related papers (2024-08-12T21:38:40Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Situation-Dependent Causal Influence-Based Cooperative Multi-agent Reinforcement Learning [18.054709749075194]
We propose a novel MARL algorithm named Situation-Dependent Causal Influence-Based Cooperative Multi-agent Reinforcement Learning (SCIC).
Our approach detects inter-agent causal influences in specific situations, based on a criterion that uses causal intervention and conditional mutual information.
The resulting update links coordinated exploration and intrinsic reward distribution, which enhances overall collaboration and performance.
arXiv Detail & Related papers (2023-12-15T05:09:32Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Stubborn: An Environment for Evaluating Stubbornness between Agents with Aligned Incentives [4.022057598291766]
We present Stubborn, an environment for evaluating stubbornness between agents with fully-aligned incentives.
In our preliminary results, the agents learn to use their partner's stubbornness as a signal for improving the choices that they make in the environment.
arXiv Detail & Related papers (2023-04-24T17:19:15Z)
- Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning [85.86440477005523]
We study more human-like RL agents which incorporate an established model of human irrationality, the Rational Inattention (RI) model.
RIRL models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
arXiv Detail & Related papers (2022-01-18T20:54:00Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copulas, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
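As a rough illustration of the copula idea in the entry above (per-agent
marginals tied together by a shared dependence function), here is a small
sampling sketch; the Gaussian copula, the Beta and Exponential marginals, and
the correlation value are illustrative choices, not the paper's learned model.

```python
# Illustrative Gaussian-copula sampling: each agent keeps its own marginal
# action distribution while the copula alone supplies the dependence.
# All distributions and parameters here are assumed for the example.
import numpy as np
from scipy.stats import norm, beta, expon

rng = np.random.default_rng(0)

rho = 0.8                            # assumed inter-agent dependence
cov = [[1.0, rho], [rho, 1.0]]

# Correlated normals -> uniforms: a draw from the Gaussian copula.
z = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
u = norm.cdf(z)                      # columns are uniform on [0, 1]

# Each agent's own marginal is applied via its inverse CDF (ppf).
actions_a = beta.ppf(u[:, 0], 2, 5)  # agent A: Beta(2, 5) actions
actions_b = expon.ppf(u[:, 1])       # agent B: Exponential actions

# Marginals are preserved; the copula carries the coordination.
print(np.corrcoef(actions_a, actions_b)[0, 1])
```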
- Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.