Multi-Agent Adversarial Attacks for Multi-Channel Communications
- URL: http://arxiv.org/abs/2201.09149v1
- Date: Sat, 22 Jan 2022 23:57:00 GMT
- Title: Multi-Agent Adversarial Attacks for Multi-Channel Communications
- Authors: Juncheng Dong, Suya Wu, Mohammadreza Soltani, Vahid Tarokh
- Abstract summary: We propose a multi-agent adversary system (MAAS) for modeling and analyzing adversaries in a wireless communication scenario.
By modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) and their respective allocated power(s) without any prior knowledge of the sender strategy.
- Score: 24.576538640840976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Reinforcement Learning (RL) has been applied as an
anti-adversarial remedy in wireless communication networks. However, studying
RL-based approaches from the adversary's perspective has received little attention.
Additionally, RL-based approaches in an anti-adversary or adversarial paradigm
mostly consider single-channel communication (either channel selection or
single channel power control), while multi-channel communication is more common
in practice. In this paper, we propose a multi-agent adversary system (MAAS)
for modeling and analyzing adversaries in a wireless communication scenario by
careful design of the reward function under realistic communication scenarios.
In particular, by modeling the adversaries as learning agents, we show that the
proposed MAAS is able to successfully choose the transmitted channel(s) and
their respective allocated power(s) without any prior knowledge of the sender
strategy. Compared to the single-agent adversary (SAA), multi-agents in MAAS
can achieve a significant reduction in signal-to-interference-plus-noise ratio
(SINR) under the same power constraints and partial observability, while
providing improved stability and a more efficient learning process. Moreover,
through empirical studies we show that simulation results closely match those
obtained in real-world communication, a conclusion pivotal to the validity of
agent performance evaluated in simulation.
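The abstract describes the setup only at a high level. As a minimal sketch (not the paper's actual algorithm), the multi-agent adversary idea can be illustrated with independent epsilon-greedy bandit agents that each pick a channel to jam and share a reward equal to the negative SINR at the receiver; the channel count, power levels, and fixed sender strategy below are all assumptions for illustration:

```python
import random

# Illustrative sketch only: independent epsilon-greedy bandits stand in for
# the paper's RL agents; all constants are assumed values, not the paper's.
N_CHANNELS = 4
NOISE = 1e-3        # receiver noise power (assumed)
SENDER_POWER = 1.0  # sender transmit power (assumed)
JAM_POWER = 0.5     # per-agent jamming power budget (assumed)

def sinr(sender_channel, jam_channels):
    # SINR seen by the receiver: jamming counts as interference only
    # when an adversary agent lands on the sender's channel.
    interference = JAM_POWER * jam_channels.count(sender_channel)
    return SENDER_POWER / (NOISE + interference)

class BanditAgent:
    def __init__(self, eps=0.1):
        self.q = [0.0] * N_CHANNELS  # running value estimate per channel
        self.n = [0] * N_CHANNELS    # pull counts per channel
        self.eps = eps

    def act(self):
        if random.random() < self.eps:
            return random.randrange(N_CHANNELS)  # explore
        return max(range(N_CHANNELS), key=lambda c: self.q[c])  # exploit

    def update(self, c, reward):
        self.n[c] += 1
        self.q[c] += (reward - self.q[c]) / self.n[c]  # incremental mean

def run(n_agents=2, steps=3000, seed=0):
    random.seed(seed)
    agents = [BanditAgent() for _ in range(n_agents)]
    sender_channel = 2  # fixed strategy, unknown to the agents
    for _ in range(steps):
        choices = [a.act() for a in agents]
        reward = -sinr(sender_channel, choices)  # shared team reward
        for a, c in zip(agents, choices):
            a.update(c, reward)
    return agents

agents = run()
# After training, the greedy action of each agent should concentrate
# on the sender's channel, suppressing the SINR.
print([max(range(N_CHANNELS), key=lambda c: a.q[c]) for a in agents])
```

In this toy model the shared reward is enough for independent learners to lock onto the sender's channel without coordination; the paper's MAAS additionally learns per-channel power allocation, which this sketch omits.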
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Learning Emergence of Interaction Patterns across Independent RL Agents in Multi-Agent Environments [3.0284592792243794]
Bottom Up Network (BUN) treats the collective of multi-agents as a unified entity.
Our empirical evaluations across a variety of cooperative multi-agent scenarios, including tasks such as cooperative navigation and traffic control, consistently demonstrate BUN's superiority over baseline methods with substantially reduced computational costs.
arXiv Detail & Related papers (2024-10-03T14:25:02Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Multi-Agent Probabilistic Ensembles with Trajectory Sampling for Connected Autonomous Vehicles [12.71628954436973]
We propose MA-PETS, a decentralized Multi-Agent Probabilistic Ensembles with Trajectory Sampling method.
In particular, in order to better capture the uncertainty of the unknown environment, MA-PETS leverages Probabilistic Ensemble neural networks.
We empirically demonstrate the superiority of MA-PETS over MFBL in terms of sample efficiency.
arXiv Detail & Related papers (2023-12-21T14:55:21Z)
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication [76.04373033082948]
Large Language Models (LLMs) have recently made significant strides in complex reasoning tasks through the Chain-of-Thought technique.
We propose Exchange-of-Thought (EoT), a novel framework that enables cross-model communication during problem-solving.
arXiv Detail & Related papers (2023-12-04T11:53:56Z)
- Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access [56.91063444859008]
We investigate the impact of broadcast transmission and probabilistic random access policy on the convergence performance of D-SGD.
Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
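The idea of tuning the access probability can be illustrated with a simplified slotted-ALOHA-style model (an assumption, not the paper's exact system): with n nodes each broadcasting with probability p, a transmission succeeds only when no other node transmits, so the expected number of successful links per slot is n·p·(1−p)^(n−1), maximized at p = 1/n:

```python
# Simplified collision model (assumed for illustration): a node's broadcast
# succeeds only if all n-1 other nodes stay silent in that slot.

def expected_successes(n, p):
    return n * p * (1 - p) ** (n - 1)

def best_p(n, grid=10_000):
    # Grid search over p; calculus gives the closed form p* = 1/n.
    return max((i / grid for i in range(1, grid)),
               key=lambda p: expected_successes(n, p))

n = 10
p_star = best_p(n)
print(round(p_star, 3))  # close to 1/n = 0.1
```

The closed form p* = 1/n follows from setting the derivative of n·p·(1−p)^(n−1) to zero; the grid search is just a sanity check of that optimum.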
arXiv Detail & Related papers (2023-05-12T10:32:26Z)
- A Decentralized Communication Framework based on Dual-Level Recurrence for Multi-Agent Reinforcement Learning [5.220940151628735]
We present a dual-level recurrent communication framework for multi-agent systems.
The first recurrence occurs in the communication sequence and is used to transmit communication data among agents.
The second recurrence is based on the time sequence and combines the historical observations for each agent.
arXiv Detail & Related papers (2022-02-22T01:36:59Z)
- Learning Selective Communication for Multi-Agent Path Finding [18.703918339797283]
Decision Causal Communication (DCC) is a simple yet efficient model to enable agents to select neighbors to conduct communication.
DCC is suitable for decentralized execution to handle large scale problems.
arXiv Detail & Related papers (2021-09-12T03:07:20Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- FedRec: Federated Learning of Universal Receivers over Fading Channels [92.15358738530037]
We propose a neural network-based symbol detection technique for downlink fading channels.
Multiple users collaborate to jointly learn a universal data-driven detector, hence the name FedRec.
The performance of the resulting receiver is shown to approach the MAP performance in diverse channel conditions without requiring knowledge of the fading statistics.
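The collaborative-training idea summarized above can be sketched with a minimal federated-averaging step; the weight values and the plain element-wise average below are hypothetical illustrations, not FedRec's actual training procedure:

```python
# Minimal federated-averaging sketch (assumed aggregation rule): each user
# trains the shared detector locally, then a server averages the weights.

def fed_avg(user_weights):
    # Element-wise mean of each user's locally updated weight vector.
    n = len(user_weights)
    return [sum(ws) / n for ws in zip(*user_weights)]

# Hypothetical locally updated weights from three users.
local = [[0.9, 1.1], [1.1, 0.9], [1.0, 1.0]]
print(fed_avg(local))
```

Averaging pulls the per-user updates toward a single universal detector, which is the mechanism the summary attributes to FedRec's jointly learned receiver.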
arXiv Detail & Related papers (2020-11-14T11:29:55Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allows it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.