Trust-based Consensus in Multi-Agent Reinforcement Learning Systems
- URL: http://arxiv.org/abs/2205.12880v2
- Date: Thu, 30 May 2024 15:04:27 GMT
- Title: Trust-based Consensus in Multi-Agent Reinforcement Learning Systems
- Authors: Ho Long Fung, Victor-Alexandru Darvariu, Stephen Hailes, Mirco Musolesi
- Abstract summary: This paper investigates the problem of unreliable agents in multi-agent reinforcement learning (MARL).
We propose Reinforcement Learning-based Trusted Consensus (RLTC), a decentralized trust mechanism.
We empirically demonstrate that our trust mechanism is able to handle unreliable agents effectively, as evidenced by higher consensus success rates.
- Score: 5.778852464898369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An often neglected issue in multi-agent reinforcement learning (MARL) is the potential presence of unreliable agents in the environment whose deviations from expected behavior can prevent a system from accomplishing its intended tasks. In particular, consensus is a fundamental problem underpinning cooperative distributed multi-agent systems. Consensus requires different agents, situated in a decentralized communication network, to reach an agreement out of a set of initial proposals that they put forward. Learning-based agents should adopt a protocol that allows them to reach consensus despite having one or more unreliable agents in the system. This paper investigates the problem of unreliable agents in MARL, considering consensus as a case study. Echoing established results in the distributed systems literature, our experiments show that even a moderate fraction of such agents can greatly impact the ability to reach consensus in a networked environment. We propose Reinforcement Learning-based Trusted Consensus (RLTC), a decentralized trust mechanism in which agents can independently decide which neighbors to communicate with. We empirically demonstrate that our trust mechanism is able to handle unreliable agents effectively, as evidenced by higher consensus success rates.
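As a concrete illustration, below is a minimal sketch of trust-gated consensus: each agent keeps a per-neighbor trust score, averages proposals only from neighbors it currently trusts, and nudges trust up or down based on feedback. All names and the bandit-style update rule are assumptions for illustration; the paper's RLTC mechanism learns the communication decision with reinforcement learning.

```python
# Minimal sketch of trust-gated consensus with scalar proposals. All names
# and the bandit-style trust update are illustrative assumptions; the
# paper's RLTC mechanism learns these decisions with reinforcement learning.
class TrustConsensusAgent:
    def __init__(self, proposal, neighbors, threshold=0.5, lr=0.1):
        self.value = proposal                      # current proposal
        self.trust = {n: 0.5 for n in neighbors}   # neutral initial trust
        self.threshold = threshold
        self.lr = lr

    def consensus_step(self, messages):
        """messages: dict mapping neighbor id -> reported value.
        Average only over neighbors whose trust clears the threshold."""
        trusted = [v for n, v in messages.items()
                   if self.trust[n] >= self.threshold]
        if trusted:
            self.value += 0.5 * (sum(trusted) / len(trusted) - self.value)

    def update_trust(self, neighbor, helpful):
        """Nudge trust toward 1 when a neighbor's messages helped
        agreement and toward 0 otherwise."""
        target = 1.0 if helpful else 0.0
        self.trust[neighbor] += self.lr * (target - self.trust[neighbor])
```

Under a scheme like this, an unreliable neighbor's reports stop influencing the average once its trust decays below the threshold, which is consistent with the abstract's claim that selective communication yields higher consensus success rates.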
Related papers
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
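The true-criticality definition above admits a direct formalization; the following is a reconstruction from the summary, with return G, policy π, and action set 𝒜 as assumed notation:

```latex
% Reconstructed formalization (notation assumed, not taken from the paper):
% G is the return, \pi the agent's policy, \mathcal{A} the action set.
C_n(s) \;=\; \mathbb{E}\big[\, G \mid s,\ \pi \,\big]
\;-\; \mathbb{E}\big[\, G \mid s,\ a_{1:n} \sim \mathrm{Unif}(\mathcal{A}),\ \text{then } \pi \,\big]
```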
- Bayesian Methods for Trust in Collaborative Multi-Agent Autonomy [11.246557832016238]
In safety-critical and contested environments, adversaries may infiltrate and compromise a number of agents.
We analyze state-of-the-art multi-target tracking algorithms under this compromised-agent threat model.
We design a trust estimation framework using hierarchical Bayesian updating.
arXiv Detail & Related papers (2024-03-25T17:17:35Z)
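As a minimal illustration of Bayesian trust updating, here is a simple Beta-Bernoulli model; this is a toy simplification, not the paper's full hierarchical framework:

```python
# Toy Beta-Bernoulli trust model: each observation is a Bernoulli trial
# ("did this agent's report agree with corroborating evidence?"). An
# illustrative simplification, not the paper's hierarchical framework.
class BetaTrust:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # uninformative Beta(1,1) prior

    def observe(self, consistent: bool):
        if consistent:
            self.alpha += 1.0   # evidence of reliability
        else:
            self.beta += 1.0    # evidence of compromise

    @property
    def trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)  # posterior mean
```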
- Reaching Consensus in Cooperative Multi-Agent Reinforcement Learning with Goal Imagination [16.74629849552254]
We propose a model-based consensus mechanism to explicitly coordinate multiple agents.
The proposed Multi-agent Goal Imagination (MAGI) framework guides agents to reach consensus with an imagined common goal.
We show that such an efficient consensus mechanism can guide all agents to cooperatively reach valuable future states.
arXiv Detail & Related papers (2024-03-05T18:07:34Z)
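One plausible reading of goal imagination is reward shaping toward a shared imagined goal. The sketch below assumes vector-valued states and a distance-based bonus; it is a guess at the general idea rather than MAGI's actual mechanism:

```python
import numpy as np

def shaped_rewards(rewards, states, imagined_goal, coef=0.1):
    """Hypothetical shaping bonus: each agent is rewarded for moving its
    state closer to a common imagined goal. A guess at the general idea;
    MAGI's actual mechanism differs in detail."""
    distances = np.linalg.norm(states - imagined_goal, axis=-1)
    return rewards - coef * distances
```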
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Mediated Multi-Agent Reinforcement Learning [3.8581550679584473]
We show how a mediator can be trained alongside agents with policy gradient to maximize social welfare.
Our experiments in matrix and iterated games highlight the potential power of applying mediators in Multi-Agent Reinforcement Learning.
arXiv Detail & Related papers (2023-06-14T10:31:37Z)
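Training a mediator with policy gradient to maximize social welfare can be sketched as a REINFORCE objective whose reward is the sum of all agents' rewards; the tensor shapes and names below are assumptions:

```python
import torch

def mediator_pg_loss(log_probs, agent_rewards):
    """REINFORCE-style objective for a mediator whose reward is social
    welfare (the sum of all agents' rewards). Shapes are assumptions:
    log_probs is [T], agent_rewards is [T, n_agents]."""
    welfare = agent_rewards.sum(dim=-1)                        # [T]
    # reward-to-go: returns[t] = sum of welfare from step t onward
    returns = torch.flip(torch.cumsum(torch.flip(welfare, [0]), 0), [0])
    return -(log_probs * returns.detach()).mean()              # minimize
```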
- An Algorithm For Adversary Aware Decentralized Networked MARL [0.0]
We expose vulnerabilities in the consensus updates of existing MARL algorithms.
We provide an algorithm that allows non-adversarial agents to reach a consensus in the presence of adversaries.
arXiv Detail & Related papers (2023-05-09T16:02:31Z)
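A classic way to make consensus updates adversary-aware is trimmed averaging: with at most f adversarial neighbors, discarding the f largest and f smallest reports bounds their influence. The sketch below is an illustrative stand-in, not necessarily the paper's algorithm:

```python
def trimmed_mean_update(own_value, neighbor_values, f):
    """Robust consensus step: drop the f largest and f smallest neighbor
    reports before averaging, bounding the influence of up to f
    adversaries. An illustrative stand-in, not the paper's algorithm."""
    vals = sorted(neighbor_values)
    kept = vals[f:len(vals) - f] if len(vals) > 2 * f else []
    if not kept:
        return own_value  # too few honest reports to use safely
    return 0.5 * own_value + 0.5 * sum(kept) / len(kept)
```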
- On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
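A facilitator that "sifts through" all agents' signals can be sketched as attention-based pooling; the function below is an assumed toy version, not the paper's architecture:

```python
import torch
import torch.nn.functional as F

def facilitate(messages, query):
    """Toy facilitator: scaled dot-product attention over all agents'
    messages ([n_agents, d]) given a learned query ([d]), returning one
    aggregated signal to broadcast back. Names/shapes are assumptions."""
    scores = messages @ query / messages.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=0)   # how much to heed each agent
    return weights @ messages            # [d] summary for all agents
```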
- Gaussian Process Based Message Filtering for Robust Multi-Agent Cooperation in the Presence of Adversarial Communication [5.161531917413708]
We consider the problem of providing robustness to adversarial communication in multi-agent systems.
We propose a communication architecture based on Graph Neural Networks (GNNs).
We show that our filtering method reduces the impact caused by non-cooperative agents.
arXiv Detail & Related papers (2020-12-01T14:21:58Z)
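Message filtering before aggregation can be sketched as a learned gate on incoming messages; `gate_net` below is an assumed learned module, and the paper's actual filter is based on Gaussian Processes rather than this sigmoid gate:

```python
import torch

def gated_aggregate(h_self, neighbor_msgs, gate_net):
    """Gate each incoming message by a learned weight in [0, 1] before
    aggregation, so suspected non-cooperative senders are down-weighted.
    `gate_net` is an assumed learned module mapping (own state, message)
    pairs to a scalar. Shapes: h_self is [d], neighbor_msgs is [k, d]."""
    pairs = torch.cat([h_self.expand_as(neighbor_msgs), neighbor_msgs], dim=-1)
    gates = torch.sigmoid(gate_net(pairs))        # [k, 1] trust gates
    return (gates * neighbor_msgs).sum(dim=0)     # filtered aggregation
```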
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.