Open-set Adversarial Defense
- URL: http://arxiv.org/abs/2009.00814v1
- Date: Wed, 2 Sep 2020 04:35:33 GMT
- Title: Open-set Adversarial Defense
- Authors: Rui Shao and Pramuditha Perera and Pong C. Yuen and Vishal M. Patel
- Abstract summary: We show that open-set recognition systems are vulnerable to adversarial attacks.
Motivated by this observation, we emphasize the need for an Open-Set Adversarial Defense (OSAD) mechanism.
This paper proposes an Open-Set Defense Network (OSDN) as a solution to the OSAD problem.
- Score: 93.25058425356694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-set recognition and adversarial defense study two key aspects of deep
learning that are vital for real-world deployment. The objective of open-set
recognition is to identify samples from open-set classes during testing, while
adversarial defense aims to defend the network against images with
imperceptible adversarial perturbations. In this paper, we show that open-set
recognition systems are vulnerable to adversarial attacks. Furthermore, we show
that adversarial defense mechanisms trained on known classes do not generalize
well to open-set samples. Motivated by this observation, we emphasize the need
for an Open-Set Adversarial Defense (OSAD) mechanism. This paper proposes an
Open-Set Defense Network (OSDN) as a solution to the OSAD problem. The proposed
network uses an encoder with feature-denoising layers coupled with a classifier
to learn a noise-free latent feature representation. Two techniques are
employed to obtain an informative latent feature space with the objective of
improving open-set performance. First, a decoder is used to ensure that clean
images can be reconstructed from the obtained latent features. Then,
self-supervision is used to ensure that the latent features are informative
enough to carry out an auxiliary task. We introduce a testing protocol to
evaluate OSAD performance and show the effectiveness of the proposed method in
multiple object classification datasets. The implementation code of the
proposed method is available at: https://github.com/rshaojimmy/ECCV2020-OSAD.
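The architecture described in the abstract (an encoder with feature-denoising layers, a known-class classifier, a reconstruction decoder, and a self-supervised auxiliary head) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all dimensions are assumed, the weights are random stand-ins for trained parameters, and the mean-filter "denoising" layer and rotation-prediction head are toy proxies for the components the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 32x32 grayscale inputs,
# a 64-d latent space, 10 known classes, 4 rotation angles for the
# self-supervised auxiliary task.
D_IN, D_LAT, N_CLASSES, N_ROT = 32 * 32, 64, 10, 4

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(0, 0.01, (D_IN, D_LAT))
W_cls = rng.normal(0, 0.01, (D_LAT, N_CLASSES))
W_dec = rng.normal(0, 0.01, (D_LAT, D_IN))
W_rot = rng.normal(0, 0.01, (D_LAT, N_ROT))

def denoise(z):
    """Toy stand-in for a feature-denoising layer: a local mean filter
    over the feature dimension, suppressing high-frequency perturbations."""
    kernel = np.ones(3) / 3.0
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, z)

def forward(x):
    z = denoise(np.maximum(x @ W_enc, 0))  # noise-free latent features
    logits = z @ W_cls                     # known-class classifier
    x_rec = z @ W_dec                      # decoder: reconstruct clean image
    rot_logits = z @ W_rot                 # auxiliary rotation prediction
    return z, logits, x_rec, rot_logits

x = rng.normal(size=(8, D_IN))             # a batch of 8 flattened images
z, logits, x_rec, rot_logits = forward(x)
print(z.shape, logits.shape, x_rec.shape, rot_logits.shape)
# → (8, 64) (8, 10) (8, 1024) (8, 4)
```

The decoder and rotation head only exist to shape the latent space during training; at test time the classifier logits (plus an open-set score) would be used.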
Related papers
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these issues well.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z) - Open-World Object Detection via Discriminative Class Prototype Learning [4.055884768256164]
Open-world object detection (OWOD) is a challenging problem that combines object detection with incremental learning and open-set learning.
We propose a novel and efficient OWOD solution from a prototype perspective, which we call OCPL: Open-world object detection via discriminative Class Prototype Learning.
arXiv Detail & Related papers (2023-02-23T03:05:04Z) - Open-Set Object Detection Using Classification-free Object Proposal and Instance-level Contrastive Learning [25.935629339091697]
Open-set object detection (OSOD) is a promising direction that handles a problem consisting of two subtasks: separating objects from the background, and open-set object classification.
We present Openset RCNN to address the challenging OSOD problem.
We show that our Openset RCNN can endow the robot with an open-set perception ability to support robotic rearrangement tasks in cluttered environments.
arXiv Detail & Related papers (2022-11-21T15:00:04Z) - Open-set Adversarial Defense with Clean-Adversarial Mutual Learning [93.25058425356694]
This paper demonstrates that open-set recognition systems are vulnerable to adversarial samples.
Motivated by these observations, we emphasize the necessity of an Open-Set Adversarial Defense (OSAD) mechanism.
This paper proposes an Open-Set Defense Network with Clean-Adversarial Mutual Learning (OSDN-CAML) as a solution to the OSAD problem.
arXiv Detail & Related papers (2022-02-12T02:13:55Z) - Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z) - DAFAR: Detecting Adversaries by Feedback-Autoencoder Reconstruction [7.867922462470315]
DAFAR allows deep learning models to detect adversarial examples with high accuracy and universality.
It transforms imperceptible-perturbation attack on the target network directly into obvious reconstruction-error attack on the feedback autoencoder.
Experiments show that DAFAR is effective against popular and arguably most advanced attacks without losing performance on legitimate samples.
arXiv Detail & Related papers (2021-03-11T06:18:50Z) - Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and also the first to explore detection for few-shot classifiers to the best of our knowledge.
arXiv Detail & Related papers (2020-12-09T14:13:41Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
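Several of the papers listed above (DAFAR, and the feature-preserving autoencoder detector for few-shot classifiers) rely on the same core idea: clean inputs reconstruct well through an autoencoder trained on legitimate data, while adversarial inputs produce a large reconstruction error. A minimal, purely illustrative sketch of that idea, using a linear (PCA-style) projection as a stand-in for the trained autoencoder and fully synthetic data and threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumption: clean data lies near a low-dimensional subspace. An
# orthonormal basis for that subspace plays the role of the trained
# autoencoder; all data here is synthetic, purely for illustration.
D, K = 64, 8
basis = np.linalg.qr(rng.normal(size=(D, K)))[0]       # clean-data subspace
clean = rng.normal(size=(200, K)) @ basis.T            # on-manifold samples
attacked = clean + 0.5 * rng.normal(size=clean.shape)  # perturbed samples

def recon_error(x):
    """Encode/decode round trip: project onto the subspace and back,
    then measure how much of the input was lost."""
    x_rec = (x @ basis) @ basis.T
    return np.linalg.norm(x - x_rec, axis=1)

# Threshold chosen as the worst reconstruction error seen on clean data.
tau = recon_error(clean).max()
flags = recon_error(attacked) > tau
print(f"{flags.mean():.2f} of perturbed samples flagged")
# → 1.00 of perturbed samples flagged
```

The real methods replace the linear projection with a trained (feedback) autoencoder and calibrate the threshold on held-out clean data, but the detection rule, flagging inputs whose reconstruction error exceeds a clean-data threshold, is the same.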
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.