Towards Robust Fine-grained Recognition by Maximal Separation of
Discriminative Features
- URL: http://arxiv.org/abs/2006.06028v1
- Date: Wed, 10 Jun 2020 18:34:45 GMT
- Title: Towards Robust Fine-grained Recognition by Maximal Separation of
Discriminative Features
- Authors: Krishna Kanth Nakka and Mathieu Salzmann
- Abstract summary: We identify the proximity of the latent representations of different classes in fine-grained recognition networks as a key factor in the success of adversarial attacks.
We introduce an attention-based regularization mechanism that maximally separates the discriminative latent features of different classes.
- Score: 72.72840552588134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks have been widely studied for general classification
tasks, but remain unexplored in the context of fine-grained recognition, where
the inter-class similarities facilitate the attacker's task. In this paper, we
identify the proximity of the latent representations of different classes in
fine-grained recognition networks as a key factor in the success of adversarial
attacks. We therefore introduce an attention-based regularization mechanism
that maximally separates the discriminative latent features of different
classes while minimizing the contribution of the non-discriminative regions to
the final class prediction. As evidenced by our experiments, this allows us to
significantly improve robustness to adversarial attacks, to the point of
matching or even surpassing that of adversarial training, but without requiring
access to adversarial samples.
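The abstract describes the mechanism only at a high level. Purely as a hypothetical illustration of what an attention-based separation regularizer of this flavor could look like, here is a minimal PyTorch sketch; the function name, the margin and bg_weight values, and the sigmoid attention are all assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def max_separation_loss(features, attention_logits, labels,
                        margin=0.5, bg_weight=0.1):
    """Hypothetical attention-based separation regularizer (not the authors' code).

    features:         (B, C, H, W) backbone feature maps
    attention_logits: (B, 1, H, W) unnormalized spatial attention
    labels:           (B,) ground-truth class indices
    """
    a = torch.sigmoid(attention_logits)                 # soft spatial mask in (0, 1)

    # Attention-pooled descriptor: the discriminative part of each image.
    z = (features * a).flatten(2).sum(-1) / (a.flatten(2).sum(-1) + 1e-6)
    z = F.normalize(z, dim=-1)                          # (B, C)

    # Separation: penalize high cosine similarity between different-class pairs.
    sim = z @ z.t()                                     # (B, B)
    diff = labels.unsqueeze(0) != labels.unsqueeze(1)
    sep = F.relu(sim - (1.0 - margin))[diff].mean() if diff.any() else z.sum() * 0.0

    # Suppression: damp feature energy at non-attended (background) locations,
    # limiting their contribution to the final class prediction.
    bg = ((1.0 - a) * features.abs()).mean()

    return sep + bg_weight * bg
```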
Related papers
- Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation for Anomaly Detection [64.50126371767476]
We propose Unilaterally Aggregated Contrastive Learning with Hierarchical Augmentation (UniCon-HA).
We explicitly encourage the concentration of inliers and the dispersion of virtual outliers via supervised and unsupervised contrastive losses (a generic version of the former is sketched below).
Our method is evaluated under three anomaly detection (AD) settings: unlabeled one-class, unlabeled multi-class, and labeled multi-class.
arXiv Detail & Related papers (2023-08-20T04:01:50Z)
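For reference only, a standard supervised contrastive loss (in the style of Khosla et al.) implements the "concentrate same-class samples" side of this idea; this is not the UniCon-HA objective itself:

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """Generic supervised contrastive loss; illustrates concentrating
    same-class embeddings, not the UniCon-HA objective.

    z:      (B, D) L2-normalized embeddings
    labels: (B,) class indices
    """
    B = z.shape[0]
    eye = torch.eye(B, dtype=torch.bool, device=z.device)

    sim = (z @ z.t()) / tau                            # scaled cosine similarities
    sim = sim.masked_fill(eye, float('-inf'))          # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # Mean log-probability over each anchor's positive pairs; anchors
    # without positives contribute zero.
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()
```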
- Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting [5.084323778393556]
Adversarial training with untargeted attacks is one of the most widely recognized defense methods.
We find that naturally imbalanced inter-class semantic similarity makes hard-class pairs become virtual targets of each other.
We propose to upweight the hard-class pair loss during model optimization, which prompts the model to learn discriminative features for hard classes (a toy version of such reweighting is sketched below).
arXiv Detail & Related papers (2022-10-26T22:51:36Z)
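To make the upweighting idea concrete, here is a hypothetical reweighted cross-entropy in which a sample's weight grows as its hardest competing class catches up with the true class; the paper's actual self-paced scheme may differ:

```python
import torch
import torch.nn.functional as F

def hard_pair_reweighted_ce(logits, labels, beta=2.0):
    """Hypothetical hard-class-pair reweighting (the paper's scheme may differ).

    logits: (B, K) class scores, labels: (B,) ground-truth indices.
    """
    probs = F.softmax(logits, dim=1)
    true_p = probs.gather(1, labels.unsqueeze(1)).squeeze(1)       # p(true class)
    # Hardest competing class: the non-true class with the most probability mass.
    hard_p = probs.scatter(1, labels.unsqueeze(1), 0.0).max(dim=1).values
    # A sample's weight grows as its hard class catches up with the true class.
    w = (1.0 + hard_p - true_p).pow(beta)
    ce = F.cross_entropy(logits, labels, reduction='none')
    return (w.detach() * ce).mean()
```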
- Improving Adversarial Robustness via Joint Classification and Multiple Explicit Detection Classes [11.584771636861877]
We show that a provable robustness framework can benefit from being extended to networks with multiple explicit abstain classes.
We propose a regularization approach and a training method that counter the degeneracy of collapsing onto a single abstain class by promoting full use of all abstain classes (illustrated below).
arXiv Detail & Related papers (2022-10-26T01:23:33Z)
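A minimal sketch of the abstain-class idea, assuming a joint softmax over real classes and several explicit abstain outputs, with an entropy bonus that spreads usage over all abstain classes; the paper's actual regularizer and training method may differ:

```python
import torch
import torch.nn.functional as F

def joint_abstain_loss(logits, labels, n_classes, lam=0.1):
    """Hypothetical sketch: joint class/abstain softmax plus an entropy
    regularizer that counters collapse onto a single abstain class.

    logits: (B, n_classes + n_abstain), labels: (B,) in [0, n_classes).
    """
    ce = F.cross_entropy(logits, labels)
    abstain_probs = F.softmax(logits, dim=1)[:, n_classes:]   # (B, n_abstain)
    usage = abstain_probs.mean(dim=0)                         # avg usage per abstain class
    usage = usage / (usage.sum() + 1e-8)
    entropy = -(usage * (usage + 1e-8).log()).sum()
    return ce - lam * entropy   # maximize entropy of abstain usage
```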
- Towards Fair Classification against Poisoning Attacks [52.57443558122475]
We study the poisoning scenario where the attacker can insert a small fraction of samples into the training data.
We propose a general and theoretically guaranteed framework that adapts traditional defense methods to fair classification under poisoning attacks.
arXiv Detail & Related papers (2022-10-18T00:49:58Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defending few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method based on the concepts of self-similarity and filtering (sketched below).
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
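As a hypothetical illustration of self-similarity-based detection, one could compare each support sample's embedding with the embedding of a filtered (e.g., blurred) copy and flag samples whose similarity drops; the paper's exact filtering operation and threshold are not specified here:

```python
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

def self_similarity_scores(embed, x, blur=GaussianBlur(5, sigma=2.0)):
    """Hypothetical self-similarity statistic for support-set screening.

    embed: feature extractor, images (B, 3, H, W) -> embeddings (B, D)
    x:     candidate support samples
    Clean samples tend to keep a high cosine similarity between the original
    and a filtered copy; adversarial ones often do not.
    """
    with torch.no_grad():
        z = F.normalize(embed(x), dim=-1)
        z_f = F.normalize(embed(blur(x)), dim=-1)
    return (z * z_f).sum(dim=-1)   # higher = more self-similar = likely clean

# Usage: flag samples below a threshold tau tuned on clean validation data.
# is_adversarial = self_similarity_scores(backbone, support_images) < tau
```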
- Attack Transferability Characterization for Adversarially Robust Multi-label Classification [37.00606062677375]
This study focuses on non-targeted evasion attacks against multi-label classifiers.
The goal of the threat is to cause misclassification with respect to as many labels as possible.
We unveil how the transferability level of the attack determines the attackability of the classifier.
arXiv Detail & Related papers (2021-06-29T12:50:20Z)
- Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
We consider non-robust features a common property of adversarial examples, and deduce that a cluster corresponding to this property can be found in representation space.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage that distribution for a likelihood-based adversarial detector (a toy version is sketched below).
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
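A toy version of a likelihood-based detector along these lines: fit one Gaussian to representations of known adversarial examples (the separate "adversarial cluster") and one to clean representations, then compare log-likelihoods. This is an illustrative assumption, not the paper's actual estimator:

```python
import torch

def fit_gaussian(feats):
    """Fit a multivariate Gaussian to representations of shape (N, D)."""
    mu = feats.mean(dim=0)
    centered = feats - mu
    cov = centered.t() @ centered / (feats.shape[0] - 1)
    cov = cov + 1e-4 * torch.eye(feats.shape[1], device=feats.device)  # keep invertible
    return torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)

# Flag inputs whose adversarial log-likelihood dominates the clean one:
# adv_dist, clean_dist = fit_gaussian(adv_feats), fit_gaussian(clean_feats)
# is_adv = adv_dist.log_prob(z) > clean_dist.log_prob(z)
```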
- Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification [22.806324361016863]
We propose a novel approach for training deep multiclass classifiers that provides adversarial robustness.
We show that regularizing the latent space with our approach yields excellent classification accuracy (a classical regularizer in this spirit is sketched below).
arXiv Detail & Related papers (2020-10-29T11:15:17Z)
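The summary does not specify the loss. As classical background only, a center-loss-style regularizer that shapes compact, well-separated latent distributions could look like this; it is not the paper's actual objective:

```python
import torch
import torch.nn.functional as F

class CenterSeparation(torch.nn.Module):
    """Center-loss-style latent regularizer: compact classes, separated centers.
    Shown as classical background only; not the paper's actual objective."""

    def __init__(self, n_classes, dim, margin=5.0):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(n_classes, dim))
        self.margin = margin

    def forward(self, z, labels):
        # Intra-class compactness: pull each feature toward its class center.
        intra = (z - self.centers[labels]).pow(2).sum(dim=1).mean()
        # Inter-class separation: push distinct centers at least `margin` apart.
        d = torch.cdist(self.centers, self.centers)
        off = ~torch.eye(len(self.centers), dtype=torch.bool, device=d.device)
        inter = F.relu(self.margin - d[off]).mean()
        return intra + inter
```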