Multi-Agent Adversarial Training Using Diffusion Learning
- URL: http://arxiv.org/abs/2303.01936v1
- Date: Fri, 3 Mar 2023 14:05:59 GMT
- Title: Multi-Agent Adversarial Training Using Diffusion Learning
- Authors: Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
- Abstract summary: We propose a general adversarial training framework for multi-agent systems using diffusion learning.
We analyze the convergence properties of the proposed scheme for convex optimization problems, and illustrate its enhanced robustness to adversarial attacks.
- Score: 55.28669771020857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work focuses on adversarial learning over graphs. We propose a general adversarial training framework for multi-agent systems using diffusion learning. We analyze the convergence properties of the proposed scheme for convex optimization problems, and illustrate its enhanced robustness to adversarial attacks.
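The abstract describes combining diffusion (adapt-then-combine) learning over a graph with adversarial training at each agent. Below is a minimal sketch of that general idea, assuming logistic-regression agents, FGSM-style l_inf perturbations, a ring graph, and illustrative step sizes; it is not the paper's exact algorithm or experimental setup.

```python
# Sketch: adversarial training with diffusion (adapt-then-combine) over a graph.
# Assumptions (not from the paper): logistic-regression agents, FGSM-style
# l_inf feature perturbations, ring topology, synthetic data.
import numpy as np

rng = np.random.default_rng(0)
K, N, d = 5, 200, 10             # agents, samples per agent, feature dimension
mu, eps, rho = 0.05, 0.1, 1e-3   # step size, perturbation budget, ridge weight

# Doubly-stochastic combination matrix for a ring graph.
A = np.zeros((K, K))
for k in range(K):
    for l in (k, (k - 1) % K, (k + 1) % K):
        A[l, k] = 1 / 3

# Synthetic local datasets: features X[k], labels y[k] in {-1, +1}.
w_true = rng.standard_normal(d)
X = [rng.standard_normal((N, d)) for _ in range(K)]
y = [np.sign(X[k] @ w_true + 0.1 * rng.standard_normal(N)) for k in range(K)]

def grad(w, Xb, yb):
    """Gradient of the regularized logistic loss at w over a batch (Xb, yb)."""
    margins = yb * (Xb @ w)
    coeff = -yb / (1.0 + np.exp(margins))
    return Xb.T @ coeff / len(yb) + rho * w

w = [np.zeros(d) for _ in range(K)]   # local models, one per agent
for it in range(500):
    psi = []
    for k in range(K):
        # Inner maximization (illustrative): FGSM-style l_inf perturbation.
        # Sign of the loss gradient w.r.t. the features; the positive sigmoid
        # factor is dropped since it does not affect the sign.
        sign_grad_x = np.sign(-y[k][:, None] * w[k][None, :])
        X_adv = X[k] + eps * sign_grad_x
        # Adapt: local gradient step on the perturbed data.
        psi.append(w[k] - mu * grad(w[k], X_adv, y[k]))
    # Combine: each agent averages its neighbors' adapted iterates.
    w = [sum(A[l, k] * psi[l] for l in range(K)) for k in range(K)]
```

The combine step uses a doubly-stochastic combination matrix, so each agent's iterate remains a convex combination of its neighbors' adapted iterates; the adversarial perturbation is crafted locally by each agent before its adapt step.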
Related papers
- AI Safety in Practice: Enhancing Adversarial Robustness in Multimodal Image Captioning [0.0]
Multimodal machine learning models that combine visual and textual data are increasingly being deployed in critical applications.
This paper presents an effective strategy to enhance the robustness of multimodal image captioning models against adversarial attacks.
arXiv Detail & Related papers (2024-07-30T20:28:31Z)
- A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations [9.240917262195046]
We propose an algorithm that automatically adjusts the weights of each component in the augmented loss function.
Experiments on a variety of continuous control tasks demonstrate that the proposed algorithm outperforms various benchmarks.
arXiv Detail & Related papers (2024-02-29T03:53:02Z)
- Exact Subspace Diffusion for Decentralized Multitask Learning [17.592204922442832]
Distributed strategies for multitask learning induce relationships between agents in a more nuanced manner, and encourage collaboration without enforcing consensus.
We develop a generalization of the exact diffusion algorithm for subspace-constrained multitask learning over networks, and derive an accurate expression for its mean-squared deviation; a sketch of the base exact-diffusion recursion is given after this list.
We verify numerically the accuracy of the predicted performance expressions, as well as the improved performance of the proposed approach over alternatives based on approximate projections.
arXiv Detail & Related papers (2023-04-14T19:42:19Z)
- Decentralized Adversarial Training over Graphs [55.28669771020857]
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to perturbations of varied strength.
arXiv Detail & Related papers (2023-03-23T15:05:16Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept of adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
- Learning Multiplicative Interactions with Bayesian Neural Networks for Visual-Inertial Odometry [44.209301916028124]
This paper presents an end-to-end multi-modal learning approach for Visual-Inertial Odometry (VIO).
It is specifically designed to exploit sensor complementarity in the light of sensor degradation scenarios.
arXiv Detail & Related papers (2020-07-15T11:39:29Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
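As referenced in the Exact Subspace Diffusion entry above, the following is a minimal sketch of the standard exact-diffusion recursion (adapt, correct, combine) that that paper generalizes to subspace-constrained multitask problems. The quadratic local costs, ring graph, step size, and iteration count are illustrative assumptions, and the subspace projection itself is omitted.

```python
# Sketch: standard exact-diffusion recursion on a decentralized quadratic problem.
# Assumptions (not from the paper): quadratic costs J_k(w) = 0.5*||H_k w - b_k||^2,
# ring topology, illustrative step size.
import numpy as np

rng = np.random.default_rng(1)
K, d, mu = 5, 4, 0.02                      # agents, dimension, step size

H = [rng.standard_normal((8, d)) for _ in range(K)]
b = [rng.standard_normal(8) for _ in range(K)]
grad = lambda k, w: H[k].T @ (H[k] @ w - b[k])   # gradient of J_k at w

# Doubly-stochastic ring combination matrix, then A_bar = (I + A) / 2.
A = np.zeros((K, K))
for k in range(K):
    for l in (k, (k - 1) % K, (k + 1) % K):
        A[l, k] = 1 / 3
A_bar = 0.5 * (np.eye(K) + A)

w = [np.zeros(d) for _ in range(K)]
psi_prev = [wk.copy() for wk in w]
for _ in range(2000):
    psi = [w[k] - mu * grad(k, w[k]) for k in range(K)]                   # adapt
    phi = [psi[k] + w[k] - psi_prev[k] for k in range(K)]                 # correct
    w = [sum(A_bar[l, k] * phi[l] for l in range(K)) for k in range(K)]   # combine
    psi_prev = psi

# All agents should approach the minimizer of the aggregate cost sum_k J_k(w).
w_star = np.linalg.solve(sum(Hk.T @ Hk for Hk in H),
                         sum(Hk.T @ bk for Hk, bk in zip(H, b)))
print(max(np.linalg.norm(wk - w_star) for wk in w))
```

The correction term (phi = psi + w - psi_prev) is what distinguishes exact diffusion from standard diffusion: it removes the steady-state bias, so with a small enough step size all agents converge to the exact minimizer of the aggregate cost.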
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.