Decentralized Adversarial Training over Graphs
- URL: http://arxiv.org/abs/2303.13326v1
- Date: Thu, 23 Mar 2023 15:05:16 GMT
- Title: Decentralized Adversarial Training over Graphs
- Authors: Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
- Abstract summary: The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to perturbations of varied strength across space.
- Score: 55.28669771020857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The vulnerability of machine learning models to adversarial attacks has been
attracting considerable attention in recent years. Most existing studies focus
on the behavior of stand-alone single-agent learners. In comparison, this work
studies adversarial training over graphs, where individual agents are subjected
to perturbations of varied strength levels across space. It is expected that
interactions among linked agents, and the heterogeneity of the attack models that
are possible over the graph, can help enhance robustness in view of the
coordination power of the group. Using a min-max formulation of diffusion
learning, we develop a decentralized adversarial training framework for
multi-agent systems. We analyze the convergence properties of the proposed
scheme for both convex and non-convex environments, and illustrate the enhanced
robustness to adversarial attacks.
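The min-max objective referenced in the abstract plausibly takes the form: minimize over w the sum over agents k of the worst-case local loss max over perturbations delta_k (with norm at most epsilon_k) of E Q_k(w; x_k + delta_k, y_k), where each agent faces its own budget epsilon_k. Below is a minimal sketch of one adapt-then-combine diffusion realization of this idea, assuming a linear least-squares model, an FGSM-style inner maximization, and a ring topology with a doubly stochastic combination matrix; all of these modeling choices are illustrative, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, N, mu, T = 5, 10, 20, 0.05, 300

# Doubly stochastic combination matrix for a ring topology with self-loops.
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k + 1) % K] = 0.25
    A[k, (k - 1) % K] = 0.25

# Hypothetical local data: every agent observes y = x @ w_star + noise.
w_star = rng.standard_normal(d)
X = [rng.standard_normal((N, d)) for _ in range(K)]
Y = [X[k] @ w_star + 0.1 * rng.standard_normal(N) for k in range(K)]

eps = 0.02 * (1 + np.arange(K))   # heterogeneous attack strengths across the graph
W = np.zeros((K, d))              # one model copy per agent

for t in range(T):
    Psi = np.empty_like(W)
    for k in range(K):
        # Inner maximization: FGSM-style worst-case perturbation of the inputs.
        r = X[k] @ W[k] - Y[k]
        X_adv = X[k] + eps[k] * np.sign(np.outer(r, W[k]))
        # Adapt: local gradient step on the adversarial least-squares loss.
        g = X_adv.T @ (X_adv @ W[k] - Y[k]) / N
        Psi[k] = W[k] - mu * g
    # Combine: each agent averages intermediate iterates from its neighbors.
    W = A @ Psi

print("mean distance to w_star:", np.linalg.norm(W - w_star, axis=1).mean())
```

The adapt-then-combine structure mirrors standard diffusion learning: each agent first descends its own (adversarially perturbed) loss, then averages with its neighbors, so robustness acquired under stronger local attacks can propagate over the graph.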
Related papers
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models [7.8245455684263545]
In this work, we aim to enhance ensemble diversity by reducing attack transferability.
We identify second-order gradients, which depict the loss curvature, as a key factor in adversarial robustness.
We introduce a novel regularizer to train an ensemble of diverse, low-curvature network models (a toy curvature penalty is sketched below).
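A hedged sketch of one way to realize a low-curvature regularizer via a finite-difference second-order proxy (in the spirit of CURE-style curvature regularization); the function name, the step size h, and the exact penalty are assumptions rather than this paper's formulation.

```python
import torch
import torch.nn.functional as F

def curvature_penalty(model, x, y, h=1e-2):
    """Finite-difference proxy for loss curvature along the gradient
    direction; a hypothetical sketch, not the paper's exact regularizer."""
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x, create_graph=True)[0]
    # Normalized ascent direction; detached so only the gradient gap is penalized.
    v = (g / (g.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)).detach()
    loss_h = F.cross_entropy(model(x + h * v), y)
    g_h = torch.autograd.grad(loss_h, x, create_graph=True)[0]
    # A large gradient gap over a small step means high curvature; penalizing
    # it flattens the loss surface, which tends to cut attack transferability.
    return ((g_h - g).flatten(1) ** 2).sum(dim=1).mean()

# Usage inside a training step (lam is a hypothetical trade-off weight):
#   total = F.cross_entropy(model(x), y) + lam * curvature_penalty(model, x, y)
```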
arXiv Detail & Related papers (2024-03-25T03:44:36Z)
- Fake or Compromised? Making Sense of Malicious Clients in Federated Learning [15.91062695812289]
We present a comprehensive analysis of the various poisoning attacks and defensive aggregation rules (AGRs) proposed in the literature.
To connect existing adversary models, we present a hybrid adversary model, which lies in the middle of the spectrum of adversaries.
We aim to provide practitioners and researchers with a clear understanding of the different types of threats they need to consider when designing FL systems.
arXiv Detail & Related papers (2024-03-10T21:37:21Z)
- HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization [37.464005524259356]
We present a new attack based on sensitivity curve maximization (SCM).
We demonstrate that it can disrupt existing robust aggregation schemes by injecting small but effective perturbations (a toy scalar illustration follows).
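To make the SCM idea concrete, here is a toy scalar illustration, assuming a median aggregator and a small injection budget of 1.0; the grid, budget, and aggregator are illustrative choices, and the actual attack targets richer robust aggregation rules.

```python
import numpy as np

def sensitivity_curve(aggregate, samples, z):
    """Empirical sensitivity curve: scaled shift in the aggregate caused
    by injecting a single value z among n-1 honest samples."""
    n = len(samples) + 1
    return n * (aggregate(np.append(samples, z)) - aggregate(samples))

rng = np.random.default_rng(1)
honest = rng.standard_normal(50)     # honest agents' scalar updates
grid = np.linspace(-1.0, 1.0, 401)   # small candidate injections within budget

# Probe the aggregator and inject where the sensitivity curve peaks.
sc = np.array([sensitivity_curve(np.median, honest, z) for z in grid])
idx = np.argmax(np.abs(sc))
z_star, shift = grid[idx], sc[idx] / (len(honest) + 1)
print(f"inject {z_star:+.3f} -> median shifts by {shift:+.4f}")
```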
arXiv Detail & Related papers (2023-04-27T08:41:57Z)
- Multi-Agent Adversarial Training Using Diffusion Learning [55.28669771020857]
We propose a general adversarial training framework for multi-agent systems using diffusion learning.
We analyze the convergence properties of the proposed scheme for convex optimization problems, and illustrate its enhanced robustness to adversarial attacks.
arXiv Detail & Related papers (2023-03-03T14:05:59Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer used in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization of the learned optimizer when attacking unseen defenses (a simplified sketch follows).
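As an illustration of the learning-to-attack idea, here is a hypothetical sketch in which an LSTM replaces PGD's fixed sign(grad) update; the class and function names, the tanh step bounding, and all hyperparameters are assumptions, and the meta-training loop that would fit the optimizer across defenses is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAttackOptimizer(nn.Module):
    """Hypothetical RNN-parameterized attack update rule: an LSTM maps
    per-coordinate gradients to update directions. A sketch in the
    spirit of MAMA, not its architecture."""
    def __init__(self, hidden=16):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        h, c = self.cell(grad.reshape(-1, 1), state)
        return self.head(h).reshape(grad.shape), (h, c)

def learned_attack(model, opt_rnn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inference-time attack loop; meta-training of opt_rnn is omitted."""
    hid = opt_rnn.cell.hidden_size
    state = (torch.zeros(x.numel(), hid, device=x.device),
             torch.zeros(x.numel(), hid, device=x.device))
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        g = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)[0]
        step, state = opt_rnn(g, state)
        state = (state[0].detach(), state[1].detach())
        # tanh keeps each learned step bounded, like the sign() it replaces.
        delta = (delta.detach() + alpha * torch.tanh(step.detach())).clamp(-eps, eps)
    return (x + delta).detach()
```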
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method, Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Quantitative and qualitative analyses on several natural-image datasets and practical systems confirm the superiority of the proposed algorithm (a toy HMC chain over the perturbation is sketched below).
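A toy sketch of sampling a sequence of adversarial examples with Hamiltonian dynamics, loosely following the idea above; treating the negative loss as an energy, projecting onto the eps-ball, and dropping the Metropolis correction are all simplifying assumptions, so this is not the HMCAM algorithm.

```python
import torch
import torch.nn.functional as F

def grad_energy(model, x, y, delta):
    # Energy U = -loss, so high-loss (adversarial) regions have high probability.
    delta = delta.detach().requires_grad_(True)
    U = -F.cross_entropy(model(x + delta), y)
    return torch.autograd.grad(U, delta)[0]

def hmc_adversarial_chain(model, x, y, eps=8 / 255, step=1e-2, L=5, n=10):
    """Toy HMC sampler over the perturbation delta."""
    delta, chain = torch.zeros_like(x), []
    for _ in range(n):
        p = torch.randn_like(delta)              # resample momentum
        d, g = delta.clone(), grad_energy(model, x, y, delta)
        p = p - 0.5 * step * g                   # leapfrog half-step
        for _ in range(L):
            d = (d + step * p).clamp(-eps, eps)  # position update + projection
            g = grad_energy(model, x, y, d)
            p = p - step * g
        delta = d                                # accept unconditionally for brevity
        chain.append((x + delta).detach())
    return chain                                 # a sequence of adversarial examples
```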
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.