Decentralized Adversarial Training over Graphs
- URL: http://arxiv.org/abs/2303.13326v1
- Date: Thu, 23 Mar 2023 15:05:16 GMT
- Title: Decentralized Adversarial Training over Graphs
- Authors: Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
- Abstract summary: The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to perturbations of varied strength across space.
- Score: 55.28669771020857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The vulnerability of machine learning models to adversarial attacks has been
attracting considerable attention in recent years. Most existing studies focus
on the behavior of stand-alone single-agent learners. In comparison, this work
studies adversarial training over graphs, where individual agents are subjected
to perturbations of varied strength levels across space. It is expected that
interactions by linked agents, and the heterogeneity of the attack models that
are possible over the graph, can help enhance robustness in view of the
coordination power of the group. Using a min-max formulation of diffusion
learning, we develop a decentralized adversarial training framework for
multi-agent systems. We analyze the convergence properties of the proposed
scheme for both convex and non-convex environments, and illustrate the enhanced
robustness to adversarial attacks.
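For concreteness, a plausible form of the min-max diffusion formulation follows; the notation ($Q_k$, $\varepsilon_k$, $a_{\ell k}$) is illustrative rather than taken verbatim from the paper:

```latex
% Sketch of a min-max diffusion objective over a graph of K agents.
% Agent k has local loss Q_k, data (x_k, y_k), and its own perturbation
% budget eps_k, so attack strength varies across space:
\[
\min_{w}\; \sum_{k=1}^{K} p_k\,
  \mathbb{E}\!\left[\, \max_{\|\delta_k\| \le \varepsilon_k}
  Q_k\big(w;\, x_k + \delta_k,\, y_k\big) \right]
\]
% Adapt-then-combine (ATC) diffusion recursion: each agent first descends
% its local worst-case loss, then averages with its neighbors' iterates
% using left-stochastic combination weights a_{lk}:
\[
\psi_{k,i} = w_{k,i-1} - \mu\, \nabla_w Q_k\big(w_{k,i-1};\,
   x_k + \delta_k^{\star},\, y_k\big), \qquad
w_{k,i} = \sum_{\ell \in \mathcal{N}_k} a_{\ell k}\, \psi_{\ell,i}
\]
```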
Related papers
- Adversarial Training for Defense Against Label Poisoning Attacks [53.893792844055106]
Label poisoning attacks pose significant risks to machine learning models.
We propose a novel adversarial training defense strategy based on support vector machines (SVMs) to counter these threats.
Our approach accommodates various model architectures and employs a projected gradient descent algorithm with kernel SVMs for adversarial training.
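As a rough illustration of the training pattern (not the paper's kernel-SVM procedure; the linear hinge model, budget, and step sizes below are assumptions), a PGD-based adversarial training loop looks like this:

```python
# A minimal PGD adversarial-training sketch for a linear hinge-loss
# (SVM-style) classifier with labels y in {-1, +1}. The model, budget
# epsilon, step size alpha, and loop sizes are illustrative assumptions.
import numpy as np

def pgd_perturb(w, b, x, y, epsilon=0.3, alpha=0.05, steps=10):
    """Inner maximization: ascend the hinge loss within an L-inf ball."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        margin = y * (w @ (x + delta) + b)
        grad = -y * w if margin < 1 else np.zeros_like(x)  # dL/dx for hinge
        delta = np.clip(delta + alpha * np.sign(grad), -epsilon, epsilon)
    return x + delta

def adversarial_train(X, y, lr=0.01, lam=0.01, epochs=20):
    """Outer minimization: SGD on the regularized worst-case hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = pgd_perturb(w, b, xi, yi)   # attack the current model
            if yi * (w @ x_adv + b) < 1:        # margin violated: full step
                w -= lr * (lam * w - yi * x_adv)
                b += lr * yi
            else:                               # only regularization shrinks w
                w -= lr * lam * w
    return w, b
```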
arXiv Detail & Related papers (2025-02-24T13:03:19Z) - Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z) - Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models [7.8245455684263545]
In this work, we aim to enhance ensemble diversity by reducing attack transferability.
We identify second-order gradients, which depict the loss curvature, as a key factor in adversarial robustness.
We introduce a novel regularizer to train multiple more-diverse low-curvature network models.
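The summary does not give the regularizer's exact form; a common finite-difference proxy for loss curvature along the gradient direction, in the spirit of prior curvature-regularization work, is:

```latex
% Finite-difference curvature proxy (illustrative; h is a small step size):
\[
R_{\mathrm{curv}}(x) \;=\;
  \big\| \nabla_x L(x + h\,z) - \nabla_x L(x) \big\|^2,
\qquad
z \;=\; \frac{\nabla_x L(x)}{\|\nabla_x L(x)\|}
\]
% Penalizing R_curv flattens the loss surface around x; training several
% such low-curvature models with a diversity term reduces the chance that
% one adversarial example transfers across the whole ensemble.
```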
arXiv Detail & Related papers (2024-03-25T03:44:36Z) - Fake or Compromised? Making Sense of Malicious Clients in Federated Learning [15.91062695812289]
We present a comprehensive analysis of the various poisoning attacks and defensive aggregation rules (AGRs) proposed in the literature.
To connect existing adversary models, we present a hybrid adversary model, which lies in the middle of the spectrum of adversaries.
We aim to provide practitioners and researchers with a clear understanding of the different types of threats they need to consider when designing FL systems.
arXiv Detail & Related papers (2024-03-10T21:37:21Z) - Adversarial Robustness on Image Classification with $k$-means [3.5385056709199536]
We evaluate the vulnerability of $k$-means clustering algorithms to adversarial attacks, emphasising the associated security risks.
We introduce and evaluate an adversarial training method that improves testing performance in adversarial scenarios.
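To make the threat concrete, here is a minimal sketch of one way a point's cluster assignment can be attacked (the geometry-based heuristic and the budget epsilon are assumptions for illustration, not the paper's attack):

```python
# Illustrative geometry-based attack on a k-means assignment: push a point
# toward its second-nearest centroid until the assignment flips.
import numpy as np

def flip_cluster(x, centroids, epsilon=0.5):
    """Return x nudged toward the second-nearest centroid."""
    dists = np.linalg.norm(centroids - x, axis=1)
    _, second = np.argsort(dists)[:2]           # nearest, runner-up
    direction = centroids[second] - x
    direction /= np.linalg.norm(direction)
    return x + epsilon * direction  # large enough epsilon flips the assignment
```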
arXiv Detail & Related papers (2023-12-15T04:51:43Z) - HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z) - Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization [37.464005524259356]
We present a new attack based on sensitivity curve maximization (SCM).
We demonstrate that it is able to disrupt existing robust aggregation schemes by injecting small but effective perturbations.
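The sensitivity curve itself is a standard tool from robust statistics; a minimal sketch of how it quantifies the effect of one injected point on an aggregator (an SCM-style attack then searches for a small perturbation that maximizes this effect):

```python
# Sensitivity curve: how far one injected point x moves an aggregate T.
# Sketch with assumed scalar samples.
import numpy as np

def sensitivity_curve(aggregator, samples, x):
    """SC_n(x) = n * (T(samples + [x]) - T(samples))."""
    n = len(samples) + 1
    return n * (aggregator(np.array(samples + [x])) - aggregator(np.array(samples)))

rng = np.random.default_rng(0)
pts = list(rng.standard_normal(99))
print(sensitivity_curve(np.mean, pts, 1e6))    # unbounded: the mean is fragile
print(sensitivity_curve(np.median, pts, 1e6))  # bounded: the median resists one outlier
```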
arXiv Detail & Related papers (2023-04-27T08:41:57Z) - Multi-Agent Adversarial Training Using Diffusion Learning [55.28669771020857]
We propose a general adversarial training framework for multi-agent systems using diffusion learning.
We analyze the convergence properties of the proposed scheme for convex optimization problems, and illustrate its enhanced robustness to adversarial attacks.
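A minimal sketch of one adapt-then-combine diffusion step, the communication pattern underlying this framework (array shapes and step size are illustrative assumptions, not the paper's exact recursion):

```python
import numpy as np

def diffusion_atc_step(W, grads, A, mu=0.1):
    """One adapt-then-combine (ATC) diffusion step over a graph.

    W: (K, d) stacked agent parameters; grads: (K, d) local gradients,
    e.g., evaluated at adversarially perturbed data; A: (K, K)
    left-stochastic combination matrix, where A[l, k] is the weight
    agent k assigns to neighbor l."""
    psi = W - mu * grads   # adapt: each agent takes a local gradient step
    return A.T @ psi       # combine: average intermediate iterates over neighbors
```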
arXiv Detail & Related papers (2023-03-03T14:05:59Z) - Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting [5.084323778393556]
Adversarial training with untargeted attacks is one of the most recognized defense methods.
We find that naturally imbalanced inter-class semantic similarity causes hard-class pairs to become virtual targets of each other.
We propose to upweight hard-class pair loss during model optimization, which promotes learning discriminative features for hard classes.
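A minimal sketch of the reweighting idea (the pair set, weight value, and helper name are hypothetical, not the paper's self-paced scheme):

```python
# Hypothetical sketch of hard-class pair reweighting: examples whose
# adversarial prediction lands in a known "hard" confusion partner of the
# true class get a larger loss weight.
import numpy as np

def hard_pair_weights(labels, adv_preds, hard_pairs, up=2.0):
    """Return per-example weights; multiply these into the adversarial loss."""
    w = np.ones(len(labels))
    for i, (y, p) in enumerate(zip(labels, adv_preds)):
        if (y, p) in hard_pairs:  # true class y confused with hard partner p
            w[i] = up
    return w

# Example: classes 3 and 5 act as each other's virtual targets.
print(hard_pair_weights([3, 5, 0], [5, 3, 1], {(3, 5), (5, 3)}))  # [2. 2. 1.]
```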
arXiv Detail & Related papers (2022-10-26T22:51:36Z) - Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce the concept of an adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
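A center-loss-style sketch of what a feature-separability objective can look like; the exact ATFS loss is not stated in the summary, so this is an assumption-laden illustration:

```python
# Pull features toward their class center, push class centers apart.
import numpy as np

def feature_separability_loss(feats, labels):
    """feats: (N, d) array; labels: (N,) int class ids."""
    feats, labels = np.asarray(feats), np.asarray(labels)
    classes = np.unique(labels)
    centers = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    intra = sum(((feats[labels == c] - mu) ** 2).sum(axis=1).mean()
                for c, mu in zip(classes, centers))   # intra-class: pull in
    diffs = centers[:, None, :] - centers[None, :, :]
    inter = (diffs ** 2).sum(axis=2).sum()            # inter-class: push apart
    pairs = max(len(classes) * (len(classes) - 1), 1)
    return intra - inter / pairs
```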
arXiv Detail & Related papers (2022-05-02T04:04:23Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
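As a rough illustration of an RNN-parameterized attack optimizer (the cell, weights, and interface below are hypothetical placeholders, not the MAMA architecture):

```python
# A recurrent cell maps the current input gradient to the next perturbation
# update; the weights (Wg, Wh, Wo, b) would be meta-trained across attacks.
import numpy as np

def rnn_attack_step(grad, h, Wg, Wh, Wo, b):
    """grad: (d,) input gradient; h: (m,) state. Returns (update, new state)."""
    h = np.tanh(Wg @ grad + Wh @ h + b)  # fold the new gradient into the state
    return Wo @ h, h                     # emit a (d,) perturbation update
```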
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
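For intuition, one leapfrog step of Hamiltonian dynamics, the integrator such samplers build on (a sketch only; HMCAM's accumulated momentum and acceptance logic are omitted):

```python
# grad_logp is an assumed callable: the gradient of an unnormalized
# log-density over adversarial examples, e.g., a scaled attack loss.
import numpy as np

def leapfrog_step(x, p, grad_logp, step=0.01):
    """One leapfrog step: returns the updated (position, momentum) pair."""
    p = p + 0.5 * step * grad_logp(x)  # half-step on momentum
    x = x + step * p                   # full step on position
    p = p + 0.5 * step * grad_logp(x)  # half-step on momentum
    return x, p
```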
arXiv Detail & Related papers (2020-10-15T16:07:26Z) - F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)