Inclusive Fitness as a Key Step Towards More Advanced Social Behaviors in Multi-Agent Reinforcement Learning Settings
- URL: http://arxiv.org/abs/2510.12555v1
- Date: Tue, 14 Oct 2025 14:20:01 GMT
- Title: Inclusive Fitness as a Key Step Towards More Advanced Social Behaviors in Multi-Agent Reinforcement Learning Settings
- Authors: Andries Rosseau, Raphaël Avalos, Ann Nowé
- Abstract summary: We propose a novel multi-agent reinforcement learning framework where each agent is assigned a genotype and where reward functions are modelled after the concept of inclusive fitness. An agent's genetic material may be shared with other agents, and our inclusive reward function naturally accounts for this. We study the resulting social dynamics in two types of network games with prisoner's dilemmas and find that our results align with well-established principles from biology.
- Score: 6.220885697097764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The competitive and cooperative forces of natural selection have driven the evolution of intelligence for millions of years, culminating in nature's vast biodiversity and the complexity of human minds. Inspired by this process, we propose a novel multi-agent reinforcement learning framework where each agent is assigned a genotype and where reward functions are modelled after the concept of inclusive fitness. An agent's genetic material may be shared with other agents, and our inclusive reward function naturally accounts for this. We study the resulting social dynamics in two types of network games with prisoner's dilemmas and find that our results align with well-established principles from biology, such as Hamilton's rule. Furthermore, we outline how this framework can extend to more open-ended environments with spatial and temporal structure, finite resources, and evolving populations. We hypothesize the emergence of an arms race of strategies, where each new strategy is a gradual improvement over earlier adaptations of other agents, effectively producing a multi-agent autocurriculum analogous to biological evolution. In contrast to the binary team-based structures prevalent in earlier research, our gene-based reward structure introduces a spectrum of cooperation ranging from fully adversarial to fully cooperative depending on genetic similarity, enabling unique non-team-based social dynamics. For example, one agent may maintain mutually cooperative relationships with two other agents while those two agents behave adversarially towards each other. We argue that incorporating inclusive fitness in agents provides a foundation for the emergence of more strategically advanced and socially intelligent agents.
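The abstract's core mechanism can be sketched in a few lines: each agent's learning signal becomes a relatedness-weighted sum of the raw rewards of all agents, and Hamilton's rule (r · b > c) predicts when cooperation is favored. The snippet below is a minimal illustration, assuming a simple gene-overlap similarity measure; the paper's exact reward formulation and genotype representation may differ.

```python
import numpy as np

def relatedness(genotypes: np.ndarray) -> np.ndarray:
    """Pairwise genetic similarity in [0, 1], taken here as the fraction of
    matching genes between two binary genotypes (an illustrative choice)."""
    n = genotypes.shape[0]
    r = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            r[i, j] = np.mean(genotypes[i] == genotypes[j])
    return r

def inclusive_rewards(raw_rewards: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Each agent's inclusive reward is the relatedness-weighted sum of all
    agents' raw rewards (own reward enters with weight r_ii = 1)."""
    return r @ raw_rewards

def cooperation_favored(r_ij: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: an altruistic act is selected for when r * b > c."""
    return r_ij * benefit > cost

# Two clones and one unrelated agent: a payoff earned by agent 0 is
# fully credited to its clone (agent 1) but not to agent 2.
genotypes = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 1, 1]])
r = relatedness(genotypes)
rewards = inclusive_rewards(np.array([1.0, 0.0, 0.0]), r)
```

Because the weighting is continuous in genetic similarity rather than a binary team label, intermediate relatedness values yield the partial-alignment dynamics the abstract describes.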
Related papers
- Embedded Universal Predictive Intelligence: a coherent framework for multi-agent learning [57.23345786304694]
We introduce a framework for prospective learning and embedded agency centered on self-prediction. We show that in multi-agent settings, self-prediction enables agents to reason about others running similar algorithms. We extend the theory of AIXI, and study universally intelligent embedded agents which start from a Solomonoff prior.
arXiv Detail & Related papers (2025-11-27T08:46:48Z) - Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems [132.77459963706437]
This book provides a comprehensive overview, framing intelligent agents within modular, brain-inspired architectures. It explores self-enhancement and adaptive evolution mechanisms, showing how agents autonomously refine their capabilities. It also examines the collective intelligence emerging from agent interactions, cooperation, and societal structures.
arXiv Detail & Related papers (2025-03-31T18:00:29Z) - Scaling Large Language Model-based Multi-Agent Collaboration [72.8998796426346]
Recent breakthroughs in large language model-driven autonomous agents have revealed that multi-agent collaboration often surpasses each individual through collective reasoning. This study explores whether the continuous addition of collaborative agents can yield similar benefits.
arXiv Detail & Related papers (2024-06-11T11:02:04Z) - Enhancing Cooperation through Selective Interaction and Long-term Experiences in Multi-Agent Reinforcement Learning [10.932974027102619]
This study introduces a computational framework based on multi-agent reinforcement learning in the spatial Prisoner's Dilemma game.
By modelling each agent using two distinct Q-networks, we disentangle the coevolutionary dynamics between cooperation and interaction.
arXiv Detail & Related papers (2024-05-04T12:42:55Z) - Mathematics of multi-agent learning systems at the interface of game theory and artificial intelligence [0.8049333067399385]
Evolutionary Game Theory and Artificial Intelligence are two fields that, at first glance, might seem distinct, but they have notable connections and intersections.
The former focuses on the evolution of behaviors (or strategies) in a population, where individuals interact with others and update their strategies based on imitation (or social learning).
The latter, meanwhile, is centered on machine learning algorithms and (deep) neural networks.
arXiv Detail & Related papers (2024-03-09T17:36:54Z) - DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that the framework can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z) - Towards a Unifying Model of Rationality in Multiagent Systems [11.321217099465196]
Agents in multiagent systems need to cooperate with other agents (including humans) nearly as effectively as those agents cooperate with one another.
We propose a generic model of socially intelligent agents, which are individually rational learners that are also able to cooperate with one another.
We show how we can construct socially intelligent agents for different forms of regret.
arXiv Detail & Related papers (2023-05-29T13:18:43Z) - Improved cooperation by balancing exploration and exploitation in intertemporal social dilemma tasks [2.541277269153809]
We propose a new learning strategy for achieving coordination by incorporating a learning rate that can balance exploration and exploitation.
We show that agents that use this simple strategy improve the collective return in a decision task called the intertemporal social dilemma.
We also explore the effects of the diversity of learning rates on the population of reinforcement learning agents and show that agents trained in heterogeneous populations develop particularly coordinated policies.
arXiv Detail & Related papers (2021-10-19T08:40:56Z) - Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2021-10-19T08:40:56Z) - Natural Emergence of Heterogeneous Strategies in Artificially Intelligent Competitive Teams [0.0]
We develop a competitive multi-agent environment called FortAttack in which two teams compete against each other.
We observe a natural emergence of heterogeneous behavior amongst homogeneous agents when such behavior can lead to the team's success.
We propose ensemble training, in which we utilize the evolved opponent strategies to train a single policy for friendly agents.
arXiv Detail & Related papers (2020-07-06T22:35:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.