Analysis and Extensions of Adversarial Training for Video Classification
- URL: http://arxiv.org/abs/2206.07953v1
- Date: Thu, 16 Jun 2022 06:49:01 GMT
- Title: Analysis and Extensions of Adversarial Training for Video Classification
- Authors: Kaleab A. Kinfu and René Vidal
- Abstract summary: We show that generating optimal attacks for video requires carefully tuning the attack parameters, especially the step size.
We propose three defenses against attacks with variable attack budgets.
Experiments on the UCF101 dataset demonstrate that the proposed methods improve adversarial robustness against multiple attack types.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training (AT) is a simple yet effective defense against
adversarial attacks on image classification systems, which is based on
augmenting the training set with attacks that maximize the loss. However, the
effectiveness of AT as a defense for video classification has not been
thoroughly studied. Our first contribution is to show that generating optimal
attacks for video requires carefully tuning the attack parameters, especially
the step size. Notably, we show that the optimal step size varies linearly with
the attack budget. Our second contribution is to show that using a smaller
(sub-optimal) attack budget at training time leads to more robust performance
at test time. Based on these findings, we propose three defenses against
attacks with variable attack budgets. The first one, Adaptive AT, is a
technique where the attack budget is drawn from a distribution that is adapted
as training iterations proceed. The second, Curriculum AT, is a technique where
the attack budget is increased as training iterations proceed. The third,
Generative AT, further couples AT with a denoising generative adversarial
network to boost robust performance. Experiments on the UCF101 dataset
demonstrate that the proposed methods improve adversarial robustness against
multiple attack types.
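To make the first finding concrete, below is a minimal PyTorch sketch of an L-infinity PGD attack on video clips in which the step size is a linear function of the attack budget. The coefficient `alpha_scale`, the assumption that clips live in [0, 1], and the default of 10 steps are illustrative choices, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, clips, labels, eps, steps=10, alpha_scale=2.5):
    """L-infinity PGD on a batch of video clips in [0, 1].

    Following the paper's finding, the step size alpha is set as a
    linear function of the budget eps; alpha_scale / steps is a common
    heuristic coefficient, not the paper's tuned value.
    """
    alpha = alpha_scale * eps / steps  # step size linear in the budget
    # Random start inside the eps-ball around the clean clips.
    x_adv = (clips + torch.empty_like(clips).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back onto the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = clips + (x_adv - clips).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```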
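The three proposed defenses all revolve around how the attack budget is chosen during training. The abstract does not give the exact schedules, sampling distribution, or GAN architecture, so the sketch below is one plausible instantiation: a linear ramp for Curriculum AT, a widening uniform distribution for Adaptive AT, and a denoiser composed in front of the classifier for Generative AT.

```python
import random
import torch.nn as nn

def curriculum_budget(step, total_steps, eps_max):
    # Curriculum AT: the budget is increased as training proceeds.
    # A linear ramp is assumed here; the paper's schedule may differ.
    return eps_max * min(1.0, step / total_steps)

def adaptive_budget(step, total_steps, eps_max):
    # Adaptive AT: the budget is drawn from a distribution that is
    # adapted over training. A uniform distribution whose upper end
    # widens with progress is one plausible choice, not the paper's.
    return random.uniform(0.0, curriculum_budget(step, total_steps, eps_max))

class DenoisedClassifier(nn.Module):
    # Generative AT: a denoising network is coupled with AT so that the
    # classifier sees denoised inputs. The abstract specifies a denoising
    # GAN; the denoiser module here is a hypothetical stand-in.
    def __init__(self, denoiser, classifier):
        super().__init__()
        self.denoiser = denoiser
        self.classifier = classifier

    def forward(self, x):
        return self.classifier(self.denoiser(x))
```

In an AT loop, each batch would draw `eps` from `curriculum_budget` or `adaptive_budget` and generate adversarial clips with `pgd_attack` above, consistent with the paper's observation that training with smaller budgets than those seen at test time can improve robustness.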
Related papers
- Versatile Defense Against Adversarial Attacks on Image Recognition [2.9980620769521513]
Defending against adversarial attacks in a real-life setting can be compared to the way antivirus software works.
A defense method based on image-to-image translation appears capable of playing this role.
The trained model improves classification accuracy under attack from nearly zero to an average of 86%.
arXiv Detail & Related papers (2024-03-13T01:48:01Z)
- Securing Recommender System via Cooperative Training [78.97620275467733]
We propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance data.
Considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems.
We put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process.
arXiv Detail & Related papers (2024-01-23T12:07:20Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- LAS-AT: Adversarial Training with Learnable Attack Strategy [82.88724890186094]
"Learnable attack strategy", dubbed LAS-AT, learns to automatically produce attack strategies to improve the model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
arXiv Detail & Related papers (2022-03-13T10:21:26Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features under arbitrary attack strengths.
The method is trained to automatically align features across attack strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes the function mapping of the clean image to guide the generation of adversaries; a hedged sketch of this idea appears after this list.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning [30.46580767540506]
We introduce two novel adversarial attack techniques to stealthily and efficiently attack Deep Reinforcement Learning agents.
The first technique is the critical point attack: the adversary builds a model to predict the future environmental states and the agent's actions, assesses the damage of each possible attack strategy, and selects the optimal one.
The second technique is the antagonist attack: the adversary automatically learns a domain-agnostic model to discover the critical moments for attacking the agent in an episode.
arXiv Detail & Related papers (2020-05-14T16:06:38Z)
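As referenced in the GAMA entry above, below is a rough PyTorch sketch of what a guided margin objective of that kind can look like: a margin term on the adversarial output plus a relaxation term that uses the clean image's function mapping (its softmax output) as guidance. The use of softmax probabilities, the exact form of both terms, and the convention that `lam` is decayed toward zero over attack iterations are assumptions based on the summary, not a verbatim reproduction of GAMA's loss.

```python
import torch
import torch.nn.functional as F

def gama_style_loss(model, x_clean, x_adv, labels, lam):
    """Attack objective (to be maximized by the adversary): a margin
    term plus a relaxation term guided by the clean input's softmax
    output. Illustrative sketch only; see the GAMA paper for the
    actual loss and the schedule for lam.
    """
    with torch.no_grad():
        p_clean = F.softmax(model(x_clean), dim=1)  # guidance signal
    p_adv = F.softmax(model(x_adv), dim=1)
    true_prob = p_adv.gather(1, labels.view(-1, 1)).squeeze(1)
    others = p_adv.clone()
    others.scatter_(1, labels.view(-1, 1), float('-inf'))  # mask true class
    margin = others.max(dim=1).values - true_prob  # positive once fooled
    # The relaxation term supplies useful gradients even where the
    # margin term is flat (e.g., under gradient masking).
    relaxation = (p_adv - p_clean).pow(2).sum(dim=1)
    return (margin + lam * relaxation).mean()
```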