Defensive Few-shot Learning
- URL: http://arxiv.org/abs/1911.06968v2
- Date: Fri, 25 Aug 2023 11:19:54 GMT
- Title: Defensive Few-shot Learning
- Authors: Wenbin Li, Lei Wang, Xingxing Zhang, Lei Qi, Jing Huo, Yang Gao and
Jiebo Luo
- Abstract summary: This paper investigates a new challenging problem called defensive few-shot learning.
It aims to learn a robust few-shot model against adversarial attacks.
The proposed framework can effectively make the existing few-shot models robust against adversarial attacks.
- Score: 77.82113573388133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates a new challenging problem called defensive few-shot
learning in order to learn a robust few-shot model against adversarial attacks.
Simply applying the existing adversarial defense methods to few-shot learning
cannot effectively solve this problem. This is because the commonly assumed
sample-level distribution consistency between the training and test sets can no
longer be met in the few-shot setting. To address this situation, we develop a
general defensive few-shot learning (DFSL) framework to answer the following
two key questions: (1) how to transfer adversarial defense knowledge from one
sample distribution to another? (2) how to narrow the distribution gap between
clean and adversarial examples under the few-shot setting? To answer the first
question, we propose an episode-based adversarial training mechanism by
assuming a task-level distribution consistency to better transfer the
adversarial defense knowledge. As for the second question, within each few-shot
task, we design two kinds of distribution consistency criteria to narrow the
distribution gap between clean and adversarial examples from the feature-wise
and prediction-wise perspectives, respectively. Extensive experiments
demonstrate that the proposed framework can effectively make the existing
few-shot models robust against adversarial attacks. Code is available at
https://github.com/WenbinLee/DefensiveFSL.git.
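The abstract describes the two ingredients of the framework only at a high level. As a rough illustration, the sketch below shows what one episode-based adversarial training step with feature-wise and prediction-wise consistency terms could look like. The prototypical-network-style episodic classifier, the single-step FGSM attack, the MSE/KL choices for the two consistency criteria, the loss weights, and all function names are illustrative assumptions rather than the authors' implementation; the actual code is in the repository linked above.

```python
# Illustrative sketch (not the authors' code): one episode-based adversarial
# training step with feature-wise and prediction-wise consistency losses,
# assuming a prototypical-network-style episodic classifier.
import torch
import torch.nn.functional as F


def class_prototypes(support_feat, support_y, n_way):
    # Mean embedding per class over the support set (labels assumed 0..n_way-1).
    return torch.stack([support_feat[support_y == c].mean(dim=0)
                        for c in range(n_way)])


def prototypical_logits(query_feat, prototypes):
    # Negative squared Euclidean distance to each prototype as logits.
    return -torch.cdist(query_feat, prototypes).pow(2)


def fgsm_queries(embed_net, query_x, query_y, prototypes, epsilon):
    # One-step FGSM on the episodic loss; a stand-in for whatever attack
    # generates adversarial examples during training.
    query_x = query_x.clone().detach().requires_grad_(True)
    logits = prototypical_logits(embed_net(query_x), prototypes)
    loss = F.cross_entropy(logits, query_y)
    grad, = torch.autograd.grad(loss, query_x)
    return (query_x + epsilon * grad.sign()).clamp(0, 1).detach()


def dfsl_step(embed_net, optimizer, support_x, support_y, query_x, query_y,
              n_way, epsilon=8 / 255, lam_feat=1.0, lam_pred=1.0):
    # Episode-based ("task-level") adversarial training: every loss term is
    # computed within a single few-shot task rather than over a global
    # sample-level distribution.
    protos = class_prototypes(embed_net(support_x), support_y, n_way)
    query_x_adv = fgsm_queries(embed_net, query_x, query_y,
                               protos.detach(), epsilon)

    feat_clean = embed_net(query_x)
    feat_adv = embed_net(query_x_adv)
    logits_clean = prototypical_logits(feat_clean, protos)
    logits_adv = prototypical_logits(feat_adv, protos)

    # Episodic classification losses on clean and adversarial queries.
    ce = F.cross_entropy(logits_clean, query_y) + F.cross_entropy(logits_adv, query_y)
    # Feature-wise consistency: pull adversarial features toward clean ones.
    feat_cons = F.mse_loss(feat_adv, feat_clean.detach())
    # Prediction-wise consistency: align the two predictive distributions.
    pred_cons = F.kl_div(F.log_softmax(logits_adv, dim=1),
                         F.softmax(logits_clean, dim=1).detach(),
                         reduction="batchmean")

    loss = ce + lam_feat * feat_cons + lam_pred * pred_cons
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```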
Related papers
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these issues well.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Understanding and Improving Ensemble Adversarial Defense [4.504026914523449]
We develop a new error theory dedicated to understanding ensemble adversarial defense.
We propose an effective approach to improve ensemble adversarial defense, named interactive global adversarial training (iGAT).
iGAT boosts ensemble performance by up to 17%, evaluated on the CIFAR10 and CIFAR100 datasets under both white-box and black-box attacks.
arXiv Detail & Related papers (2023-10-27T20:43:29Z)
- Among Us: Adversarially Robust Collaborative Perception by Consensus [50.73128191202585]
Multiple robots can collaboratively perceive a scene (e.g., detect objects) better than individual robots can.
We propose ROBOSAC, a novel sampling-based defense strategy generalizable to unseen attackers.
We validate our method on the task of collaborative 3D object detection in autonomous driving scenarios.
arXiv Detail & Related papers (2023-03-16T17:15:25Z)
- Scalable Attribution of Adversarial Attacks via Multi-Task Learning [11.302242821058865]
The Adversarial Attribution Problem (AAP) concerns identifying the signatures used to generate adversarial examples.
We propose a multi-task learning framework named Multi-Task Adversarial Attribution (MTAA) to recognize three such signatures simultaneously.
arXiv Detail & Related papers (2023-02-25T12:27:44Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many methods that have been proposed can cause direct harm, like false rejection and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses: randomized smoothing and neural rejection, finding randomized smoothing more equitable due to the sampling mechanism for minority groups.
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- On the Limitations of Stochastic Pre-processing Defenses [42.80542472276451]
Defending against adversarial examples remains an open problem.
A common belief is that randomness at inference increases the cost of finding adversarial inputs.
In this paper, we investigate such pre-processing defenses and demonstrate that they are flawed.
arXiv Detail & Related papers (2022-06-19T21:54:42Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance. (A minimal illustrative sketch of the self-similarity idea appears after this list.)
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
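The last related paper above mentions detection via "self-similarity and filtering" in only one sentence. The sketch below illustrates one way such an attack-agnostic check could look: flag support samples whose feature similarity to the other samples of their class is unusually low. The cosine-similarity measure, the threshold-based filtering rule, and all names here are assumptions for illustration, not that paper's actual method.

```python
# Illustrative sketch (assumed, not the referenced paper's method): detect
# suspicious few-shot support samples via within-class self-similarity.
import torch
import torch.nn.functional as F


def flag_suspicious_support(embed_net, support_x, support_y, threshold=0.5):
    """Return a boolean mask marking support samples whose average cosine
    similarity to other samples of the same class falls below `threshold`."""
    with torch.no_grad():
        feats = F.normalize(embed_net(support_x), dim=1)  # unit-norm embeddings

    suspicious = torch.zeros(len(support_x), dtype=torch.bool)
    for c in support_y.unique():
        idx = (support_y == c).nonzero(as_tuple=True)[0]
        if len(idx) < 2:
            continue  # self-similarity is undefined for a single-shot class
        sims = feats[idx] @ feats[idx].T              # pairwise cosine similarities
        sims.fill_diagonal_(0.0)                      # ignore self-comparisons
        mean_sim = sims.sum(dim=1) / (len(idx) - 1)   # average similarity to classmates
        suspicious[idx] = mean_sim < threshold        # filter low-consistency samples
    return suspicious
```

In a detection pipeline of this kind, flagged samples would simply be filtered out of the support set before the few-shot classifier forms its class representations.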
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.