LAS-AT: Adversarial Training with Learnable Attack Strategy
- URL: http://arxiv.org/abs/2203.06616v1
- Date: Sun, 13 Mar 2022 10:21:26 GMT
- Title: LAS-AT: Adversarial Training with Learnable Attack Strategy
- Authors: Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
- Abstract summary: "Learnable attack strategy", dubbed LAS-AT, learns to automatically produce attack strategies to improve the model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
- Score: 82.88724890186094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training (AT) is always formulated as a minimax problem, of which
the performance depends on the inner optimization that involves the generation
of adversarial examples (AEs). Most previous methods adopt Projected Gradient
Descent (PGD) with manually specified attack parameters for AE generation. A
combination of the attack parameters can be referred to as an attack strategy.
Several works have revealed that using a fixed attack strategy to generate AEs
during the whole training phase limits the model robustness and propose to
exploit different attack strategies at different training stages to improve
robustness. But those multi-stage hand-crafted attack strategies need much
domain expertise, and the robustness improvement is limited. In this paper, we
propose a novel framework for adversarial training by introducing the concept
of "learnable attack strategy", dubbed LAS-AT, which learns to automatically
produce attack strategies to improve the model robustness. Our framework is
composed of a target network that uses AEs for training to improve robustness
and a strategy network that produces attack strategies to control the AE
generation. Experimental evaluations on three benchmark databases demonstrate
the superiority of the proposed method. The code is released at
https://github.com/jiaxiaojunQAQ/LAS-AT.
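To make the notion of an "attack strategy" concrete, here is a minimal PGD sketch in which the strategy (perturbation budget, step size, iteration count) is an explicit parameter of AE generation. The toy linear classifier, function names, and strategy values are illustrative assumptions, not the paper's implementation; LAS-AT would have a strategy network produce such parameters rather than fixing them by hand.

```python
import numpy as np

def loss_and_grad(x, w, y):
    """Logistic loss of a toy linear classifier and its gradient w.r.t. the input x."""
    z = y * np.dot(w, x)
    loss = np.log1p(np.exp(-z))
    grad = -y * w / (1.0 + np.exp(z))  # d(loss)/dx
    return loss, grad

def pgd_attack(x, w, y, strategy):
    """Generate an AE under a given attack strategy (eps, step, iters)."""
    eps, step, iters = strategy["eps"], strategy["step"], strategy["iters"]
    x_adv = x.copy()
    for _ in range(iters):
        _, grad = loss_and_grad(x_adv, w, y)
        x_adv = x_adv + step * np.sign(grad)       # gradient ascent on the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project back into the L_inf ball
    return x_adv

x = np.array([1.0, -0.5])
w = np.array([0.8, 0.6])
y = 1.0
strategy = {"eps": 0.1, "step": 0.03, "iters": 10}  # one fixed strategy
x_adv = pgd_attack(x, w, y, strategy)
```

In standard AT the `strategy` dict stays fixed for the whole training run; the paper's point is that sampling it from a learned strategy network instead can yield stronger robustness.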
Related papers
- Optimizing Cyber Defense in Dynamic Active Directories through Reinforcement Learning [10.601458163651582]
This paper addresses the absence of effective edge-blocking ACO strategies in dynamic, real-world networks.
It specifically targets the cybersecurity vulnerabilities of organizational Active Directory (AD) systems.
Unlike the existing literature on edge-blocking defenses which considers AD systems as static entities, our study counters this by recognizing their dynamic nature.
arXiv Detail & Related papers (2024-06-28T01:37:46Z)
- Analysis and Extensions of Adversarial Training for Video Classification [0.0]
We show that generating optimal attacks for video requires carefully tuning the attack parameters, especially the step size.
We propose three defenses against attacks with variable attack budgets.
Experiments on the UCF101 dataset demonstrate that the proposed methods improve adversarial robustness against multiple attack types.
arXiv Detail & Related papers (2022-06-16T06:49:01Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
arXiv Detail & Related papers (2021-02-10T04:23:23Z)
- Self-Progressing Robust Training [146.8337017922058]
Current robust training methods such as adversarial training explicitly use an "attack" to generate adversarial examples.
We propose a new framework called SPROUT, self-progressing robust training.
Our results shed new light on scalable, effective and attack-independent robust training methods.
arXiv Detail & Related papers (2020-12-22T00:45:24Z)
- Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-02-20T08:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.