Innate-Values-driven Reinforcement Learning based Cooperative Multi-Agent Cognitive Modeling
- URL: http://arxiv.org/abs/2401.05572v2
- Date: Mon, 09 Jun 2025 19:52:29 GMT
- Title: Innate-Values-driven Reinforcement Learning based Cooperative Multi-Agent Cognitive Modeling
- Authors: Qin Yang
- Abstract summary: This paper proposes a general innate-values reinforcement learning architecture from the individual preferences angle. We tested the Multi-Agent IVRL Actor-Critic Model in different StarCraft Multi-Agent Challenge settings.
- Score: 1.8220718426493654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-agent systems (MAS), the dynamic interaction among multiple decision-makers is driven by their innate values, affecting the environment's state, and can cause specific behavioral patterns to emerge. On the other hand, innate values in cognitive modeling reflect individual interests and preferences for specific tasks and drive them to develop diverse skills and plans, satisfying their various needs and achieving common goals in cooperation. Therefore, building the awareness of AI agents to balance the group utilities and system costs and meet group members' needs in their cooperation is a crucial problem for individuals learning to support their community and even integrate into human society in the long term. However, the current MAS reinforcement learning domain lacks a general intrinsic model to describe agents' dynamic motivation for decision-making and learning from an individual needs perspective in their cooperation. To address the gap, this paper proposes a general MAS innate-values reinforcement learning (IVRL) architecture from the individual preferences angle. We tested the Multi-Agent IVRL Actor-Critic Model in different StarCraft Multi-Agent Challenge (SMAC) settings, which demonstrated its potential to organize the group's behaviours to achieve better performance.
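The abstract does not spell out how innate values enter the learning update. Below is a minimal sketch of one plausible reading, assuming the innate values act as a weight vector over need-specific reward channels that is collapsed into a scalar reward before a standard actor-critic update; the sizes, weights, and environment are all made up for illustration.

```python
# Illustrative sketch only: innate values as a weight vector over need-specific
# reward channels, feeding a tabular actor-critic. Not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_needs = 4, 3, 3   # hypothetical sizes
w = np.array([0.5, 0.3, 0.2])            # assumed innate-value weights per need

theta = np.zeros((n_states, n_actions))  # actor: softmax policy logits
V = np.zeros(n_states)                   # critic: state values
alpha, beta, gamma = 0.1, 0.1, 0.95

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def step(s, a):
    # Hypothetical environment: random next state, one reward per need channel.
    s2 = rng.integers(n_states)
    r_needs = rng.normal(size=n_needs)
    return s2, r_needs

s = 0
for t in range(1000):
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    s2, r_needs = step(s, a)
    r = w @ r_needs                       # innate values collapse needs to a scalar
    delta = r + gamma * V[s2] - V[s]      # TD error
    V[s] += beta * delta                  # critic update
    grad = -p                             # grad of log-softmax at chosen action...
    grad[a] += 1.0                        # ...is onehot(a) - p
    theta[s] += alpha * delta * grad      # actor update
    s = s2
```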
Related papers
- Beyond Brainstorming: What Drives High-Quality Scientific Ideas? Lessons from Multi-Agent Collaboration [59.41889496960302]
This paper investigates whether structured multi-agent discussions can surpass solitary ideation. We propose a cooperative multi-agent framework for generating research proposals. We employ a comprehensive protocol with agent-based scoring and human review across dimensions such as novelty, strategic vision, and integration depth.
arXiv Detail & Related papers (2025-08-06T15:59:18Z) - Collaborative Learning in Agentic Systems: A Collective AI is Greater Than the Sum of Its Parts [12.471774408499817]
We introduce Modular Sharing and Composition in Collective Learning (MOSAIC), an agentic algorithm that allows multiple agents to independently solve different tasks. Results on a set of RL benchmarks show that MOSAIC achieves greater sample efficiency than isolated learners.
arXiv Detail & Related papers (2025-06-05T20:38:11Z) - Rationality based Innate-Values-driven Reinforcement Learning [1.8220718426493654]
Innate values describe agents' intrinsic motivations, which reflect their inherent interests and preferences to pursue goals.
Reinforcement learning is an excellent model for describing the innate-values-driven (IV) behaviors of AI agents.
This paper proposes a hierarchical reinforcement learning model: the innate-values-driven reinforcement learning (IVRL) model.
arXiv Detail & Related papers (2024-11-14T03:28:02Z) - Enhancing Heterogeneous Multi-Agent Cooperation in Decentralized MARL via GNN-driven Intrinsic Rewards [1.179778723980276]
Multi-agent Reinforcement Learning (MARL) is emerging as a key framework for sequential decision-making and control tasks.
The deployment of these systems in real-world scenarios often requires decentralized training, a diverse set of agents, and learning from infrequent environmental reward signals.
We propose the CoHet algorithm, which utilizes a novel Graph Neural Network (GNN) based intrinsic motivation to facilitate the learning of heterogeneous agent policies.
arXiv Detail & Related papers (2024-08-12T21:38:40Z) - Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning [51.52387511006586]
We propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module that computes the agent's best response.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
arXiv Detail & Related papers (2024-06-12T08:48:06Z) - Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence [0.0]
This paper introduces a transformative approach by organizing Large Language Models into community-based structures.
We investigate different organizational models (hierarchical, flat, dynamic, and federated), each presenting unique benefits and challenges for collaborative AI systems.
The implementation of such communities holds substantial promise for improving problem-solving capabilities in AI.
arXiv Detail & Related papers (2024-05-06T20:15:45Z) - DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with those of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z) - ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors [93.38830440346783]
We propose AgentVerse, a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z) - Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models [1.0742675209112622]
Multi-Agent Systems (MAS) are critical for many applications requiring collaboration and coordination with humans.
One major challenge is the simultaneous learning and interaction of independent agents in dynamic environments.
We propose three variants of Multi-Agent Instance-Based Learning (MAIBL) models.
We demonstrate that the MAIBL models exhibit faster learning and achieve better coordination in a dynamic CMOTP task with various reward settings compared to current MADRL models.
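For context, MAIBL-style models build on the instance-based learning (IBL) blended value: stored (outcome, timestamp) instances are retrieved with recency-weighted activations and blended into a value estimate. The simplified single-occurrence form sketched below follows standard IBL theory, not the paper's exact variants; the decay d and temperature tau are standard IBL parameters with illustrative values.

```python
# Sketch of an IBL blended value (simplified: one occurrence per instance).
import numpy as np

def blended_value(outcomes, timestamps, t_now, d=0.5, tau=0.25):
    # Activation: log power-law recency of each remembered instance.
    activation = np.log((t_now - np.asarray(timestamps)) ** (-d))
    p = np.exp(activation / tau)
    p /= p.sum()                        # retrieval probabilities
    return float(p @ np.asarray(outcomes, dtype=float))

# Two remembered payoffs for one action: an old high one and a recent low one.
# The recent low payoff dominates the blend (result is close to 2.0).
print(blended_value(outcomes=[10.0, 2.0], timestamps=[1, 9], t_now=10))
```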
arXiv Detail & Related papers (2023-08-18T00:39:06Z) - Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL).
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z) - LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning [122.47938710284784]
We propose a novel framework for learning dynamic subtask assignment (LDSA) in cooperative MARL.
To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy.
We show that LDSA learns reasonable and effective subtask assignment for better collaboration.
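The ability-based selection strategy can be pictured with a small sketch. The embeddings, the dot-product match score, and the softmax sampling below are assumptions standing in for LDSA's learned components.

```python
# Sketch: each agent samples a subtask with probability rising in the match
# between its ability vector and the subtask representation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_subtasks, dim = 4, 2, 8
abilities = rng.normal(size=(n_agents, dim))      # per-agent ability vectors
subtasks = rng.normal(size=(n_subtasks, dim))     # subtask representations

scores = abilities @ subtasks.T                   # match scores, one row per agent
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)         # softmax over subtasks per agent
assignment = [rng.choice(n_subtasks, p=p) for p in probs]
print(assignment)                                 # agents grouped by chosen subtask
```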
arXiv Detail & Related papers (2022-05-05T10:46:16Z) - Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally
Inattentive Reinforcement Learning [85.86440477005523]
We study more human-like RL agents that incorporate an established model of human irrationality, the Rational Inattention (RI) model.
Rationally Inattentive RL (RIRL) models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
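The information cost itself is a standard quantity: the mutual information I(S; A) between states and actions induced by a stochastic policy. A self-contained sketch of computing it (the distributions below are illustrative, not from the paper):

```python
# Mutual information I(S; A) under a stochastic policy pi(a|s) and a state
# distribution p(s); in RI models this enters the objective as a penalty.
import numpy as np

p_s = np.array([0.25, 0.25, 0.5])                 # state distribution (assumed)
pi = np.array([[0.9, 0.1],                        # pi(a|s), one row per state
               [0.2, 0.8],
               [0.5, 0.5]])

p_a = p_s @ pi                                    # marginal action distribution
joint = p_s[:, None] * pi                         # p(s, a)
marg = p_s[:, None] * p_a[None, :]                # p(s) * p(a)
mask = joint > 0
info_cost = np.sum(joint[mask] * np.log(joint[mask] / marg[mask]))
print(f"I(S;A) = {info_cost:.4f} nats")
```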
arXiv Detail & Related papers (2022-01-18T20:54:00Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copulas, powerful statistical tools for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
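The factorization the abstract describes, separate per-agent marginals plus a copula carrying the dependence, can be illustrated with a Gaussian copula. The correlation value and the per-agent marginals below are assumptions standing in for the learned components.

```python
# Sketch: sample coordinated actions through a Gaussian copula, then push each
# coordinate through a different agent-specific marginal (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.8                                         # assumed coordination strength
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=1000)
u = stats.norm.cdf(z)                             # correlated uniforms in [0,1]^2

a1 = stats.norm.ppf(u[:, 0], loc=0.0, scale=1.0)  # agent 1: Gaussian actions
a2 = stats.expon.ppf(u[:, 1], scale=2.0)          # agent 2: exponential actions
print(np.corrcoef(a1, a2)[0, 1])                  # marginals differ, coordination remains
```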
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - Reward Machines for Cooperative Multi-Agent Reinforcement Learning [30.84689303706561]
In cooperative multi-agent reinforcement learning, a collection of agents learns to interact in a shared environment to achieve a common goal.
We propose the use of reward machines (RM) -- Mealy machines used as structured representations of reward functions -- to encode the team's task.
The proposed novel interpretation of RMs in the multi-agent setting explicitly encodes required teammate interdependencies, allowing the team-level task to be decomposed into sub-tasks for individual agents.
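A reward machine in this Mealy-machine sense can be sketched in a few lines: states, event-labeled transitions, and a reward output per transition. The two-step "press button, then reach goal" task below is a made-up example, not one from the paper.

```python
# Minimal reward machine as a Mealy machine (illustrative example).
class RewardMachine:
    def __init__(self, transitions, initial_state):
        # transitions: {(rm_state, event_label): (next_rm_state, reward)}
        self.transitions = transitions
        self.state = initial_state

    def step(self, event):
        # Unlabeled events leave the machine where it is with zero reward.
        next_state, reward = self.transitions.get((self.state, event), (self.state, 0.0))
        self.state = next_state
        return reward

rm = RewardMachine(
    transitions={
        ("u0", "button"): ("u1", 0.0),   # sub-task 1: some agent presses the button
        ("u1", "goal"):   ("u2", 1.0),   # sub-task 2: the team then reaches the goal
    },
    initial_state="u0",
)
for event in ["goal", "button", "goal"]:  # goal pays off only after the button
    print(event, rm.step(event))
```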
arXiv Detail & Related papers (2020-07-03T23:08:14Z) - Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
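The reward-gifting mechanism can be sketched as follows, with a linear incentive function as a placeholder for the learned one in the paper; all sizes and parameters are illustrative.

```python
# Sketch: each agent's training reward is its environment reward plus incentives
# chosen by the other agents (placeholder linear incentive function).
import numpy as np

rng = np.random.default_rng(0)
n_agents, obs_dim = 3, 4
W = rng.normal(scale=0.1, size=(n_agents, n_agents, obs_dim))  # incentive params

def incentives(obs):
    # incentive[i, j]: reward that agent i gives to agent j (none to itself)
    out = np.einsum("ijk,k->ij", W, obs)
    np.fill_diagonal(out, 0.0)
    return out

obs = rng.normal(size=obs_dim)            # shared observation, for simplicity
env_rewards = np.array([1.0, -0.5, 0.2])  # per-agent environment rewards
given = incentives(obs)
total = env_rewards + given.sum(axis=0)   # each agent receives a column of gifts
# In the paper, W would itself be trained, and givers pay for what they give.
print(total)
```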
arXiv Detail & Related papers (2020-06-10T20:12:38Z) - Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)