On the transferability of adversarial examples between convex and 01 loss models
- URL: http://arxiv.org/abs/2006.07800v2
- Date: Wed, 29 Jul 2020 20:57:52 GMT
- Title: On the transferability of adversarial examples between convex and 01 loss models
- Authors: Yunzhe Xue, Meiyan Xie, Usman Roshan
- Abstract summary: We study transferability of adversarial examples between linear 01 loss and convex (hinge) loss models.
We show how the non-continuity of 01 loss makes adversaries non-transferable in a dual layer neural network.
We show that our dual layer sign activation network with 01 loss can attain robustness on par with simple convolutional networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The 01 loss gives different and more accurate boundaries than convex loss
models in the presence of outliers. Could the difference of boundaries
translate to adversarial examples that are non-transferable between 01 loss and
convex models? We explore this empirically in this paper by studying
transferability of adversarial examples between linear 01 loss and convex
(hinge) loss models, and between dual layer neural networks with sign
activation and 01 loss vs sigmoid activation and logistic loss. We first show
that white box adversarial examples do not transfer effectively between convex
and 01 loss models, or between pairs of 01 loss models, compared to between
pairs of convex models. As a result of this non-transferability we see that
convex substitute model black box attacks are less effective on 01 loss models
than on convex models. Interestingly we also see that 01 loss substitute model
attacks are ineffective on both convex and 01 loss models, most likely due to
the non-uniqueness of 01 loss models.
We show intuitively by example how the presence of outliers can cause different
decision boundaries between 01 and convex loss models which in turn produces
adversaries that are non-transferable. Indeed we see on MNIST that adversaries
transfer between 01 loss and convex models more easily than on CIFAR10 and
ImageNet, which are likely to contain outliers. We show intuitively by example
how the non-continuity of 01 loss makes adversaries non-transferable in a dual
layer neural network. We discretize CIFAR10 features to be more like MNIST and
find that it does not improve transferability, thus suggesting that different
boundaries due to outliers are more likely the cause of non-transferability. As
a result of this non-transferability we show that our dual layer sign
activation network with 01 loss can attain robustness on par with simple
convolutional networks.
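
The comparison at the heart of the paper can be illustrated with a small self-contained sketch (our own toy illustration, not the authors' code, data, or attack): fit a linear hinge-loss model and a crude linear 01-loss model on the same synthetic data, craft FGSM-style white-box adversarial examples against the hinge model, and check how often they also fool the 01-loss model. All hyperparameters and the random-search 01-loss trainer below are illustrative assumptions.

```python
# Toy illustration (not the paper's code or data): measure how FGSM-style
# white-box adversarial examples crafted against a linear hinge-loss model
# transfer to a crude linear 01-loss model trained on the same data.
import numpy as np

rng = np.random.default_rng(0)

# synthetic binary data with some label noise acting as outliers
n, d = 600, 20
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))
y[rng.choice(n, size=30, replace=False)] *= -1

def zero_one_loss(w, b, X, y):
    return np.mean(np.sign(X @ w + b) != y)

def train_hinge(X, y, lr=0.05, epochs=300, lam=1e-3):
    """Linear hinge-loss (SVM-style) model via subgradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                       # margin violations
        gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
        gb = -y[viol].sum() / len(y)
        w, b = w - lr * gw, b - lr * gb
    return w, b

def train_01(X, y, iters=3000):
    """Crude linear 01-loss model via random coordinate perturbation search."""
    w, b = rng.normal(size=X.shape[1]), 0.0
    best = zero_one_loss(w, b, X, y)
    for _ in range(iters):
        w_try, b_try = w.copy(), b
        j = rng.integers(X.shape[1] + 1)
        if j < X.shape[1]:
            w_try[j] += rng.normal(scale=0.5)
        else:
            b_try += rng.normal(scale=0.5)
        loss = zero_one_loss(w_try, b_try, X, y)
        if loss <= best:
            w, b, best = w_try, b_try, loss
    return w, b

w_h, b_h = train_hinge(X, y)
w_z, b_z = train_01(X, y)

# white-box FGSM-style attack against the hinge model: step against its weights
eps = 0.3
X_adv = X - eps * y[:, None] * np.sign(w_h)

print("hinge model  : clean err %.3f, white-box adv err %.3f"
      % (zero_one_loss(w_h, b_h, X, y), zero_one_loss(w_h, b_h, X_adv, y)))
print("01-loss model: clean err %.3f, err on transferred adv %.3f"
      % (zero_one_loss(w_z, b_z, X, y), zero_one_loss(w_z, b_z, X_adv, y)))
```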
Related papers
- Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search [12.263913626161155]
A common type of evaluation is to approximate the adversarial risk of a model as a robustness indicator.
We propose AutoLoss-AR, the first method for searching loss functions to tighten this approximation error.
The results demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-11-09T11:47:43Z)
- Sample Selection with Uncertainty of Losses for Learning with Noisy Labels [145.06552420999986]
In learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled during training.
However, losses are generated on the fly by a model that is itself being trained on noisy labels, so large-loss data are likely, but not certain, to be incorrectly labeled.
In this paper, we incorporate the uncertainty of losses by adopting interval estimation instead of point estimation of losses.
arXiv Detail & Related papers (2021-06-01T12:53:53Z)
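
The interval-estimation idea summarized in the entry above can be sketched as follows (a minimal, assumed simplification, not the paper's actual selection rule): keep each sample's losses over the last few epochs and rank samples by an upper confidence bound on the mean loss rather than by the latest point estimate.

```python
# Hedged sketch of "interval estimation of losses" for small-loss selection:
# track each sample's loss over the last K epochs and treat a sample as
# "probably clean" based on the upper end of its loss interval, not the latest
# point estimate. Simplified illustration only; the paper's rule may differ.
import numpy as np

def select_clean(loss_history: np.ndarray, keep_ratio: float = 0.7, z: float = 1.0):
    """loss_history: (num_samples, K) per-sample losses from the last K epochs."""
    mean = loss_history.mean(axis=1)
    std = loss_history.std(axis=1, ddof=1)
    k = loss_history.shape[1]
    ucb = mean + z * std / np.sqrt(k)          # upper end of the loss interval
    n_keep = int(keep_ratio * len(ucb))
    return np.argsort(ucb)[:n_keep]            # indices treated as clean this round

# toy usage: 8 samples, losses recorded over 5 epochs
rng = np.random.default_rng(0)
hist = rng.gamma(shape=2.0, scale=0.3, size=(8, 5))
hist[2] += 1.5                                 # a consistently large-loss (likely noisy) sample
print(select_clean(hist, keep_ratio=0.5))
```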
- Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints [19.766374145321528]
Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various tasks in computer vision.
Recent studies demonstrate that these models are vulnerable to carefully crafted adversarial samples and suffer from a significant performance drop when predicting them.
Here we offer a unique insight into the predictive behavior of CNNs: they tend to misclassify adversarial samples into the most probable false classes.
We propose a new Probabilistically Compact (PC) loss with logit constraints which can be used as a drop-in replacement for cross-entropy (CE) loss to improve CNNs' adversarial robustness.
arXiv Detail & Related papers (2020-12-14T16:40:53Z)
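
As a rough illustration of the entry above, here is a margin-style loss in the spirit of the probabilistically compact idea, written as a drop-in replacement for cross-entropy. The exact PC loss in the paper may differ; this assumed form simply penalizes any false-class logit that comes within a margin of the true-class logit.

```python
# Hedged sketch of a PC-style objective: penalize every false-class logit that
# gets within a margin `xi` of the true-class logit, discouraging probability
# mass from piling onto the most probable false classes. Assumed simplified
# form, not necessarily the paper's exact definition.
import torch
import torch.nn.functional as F

def pc_style_loss(logits: torch.Tensor, targets: torch.Tensor, xi: float = 1.0) -> torch.Tensor:
    """logits: (batch, classes); targets: (batch,) integer class indices."""
    true_logit = logits.gather(1, targets.unsqueeze(1))            # (batch, 1)
    margins = F.relu(xi + logits - true_logit)                     # hinge against every class
    mask = F.one_hot(targets, num_classes=logits.size(1)).bool()
    margins = margins.masked_fill(mask, 0.0)                       # drop the true-class term
    return margins.sum(dim=1).mean()

# drop-in usage where cross_entropy would normally appear
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.tensor([1, 0, 3, 7])
loss = pc_style_loss(logits, targets)
loss.backward()
```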
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
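
A simplified sketch of a Gaussian-mixture-style loss as described in the entry above, assuming identity covariances: class scores are negative squared distances to learnable class means, a margin enlarges the true-class distance inside the classification term, and a likelihood term pulls features toward their class mean. This is an assumed reduction, not the paper's exact formulation.

```python
# Hedged GM-style loss with identity covariances: scores are negative squared
# distances to learnable class means, a margin tightens the true class, and a
# likelihood term pulls each feature toward its own class mean. Simplified.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMStyleLoss(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, margin: float = 0.1, lam: float = 0.1):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.05)
        self.margin, self.lam = margin, lam

    def forward(self, feats: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        d2 = torch.cdist(feats, self.means).pow(2)                  # (batch, classes) squared distances
        onehot = F.one_hot(targets, self.means.size(0)).float()
        logits = -(d2 + self.margin * d2 * onehot) / 2              # margin enlarges the true-class distance
        cls_loss = F.cross_entropy(logits, targets)
        lkd_loss = (d2 * onehot).sum(dim=1).mean() / 2              # pull features toward their own mean
        return cls_loss + self.lam * lkd_loss

# usage with features from any backbone (hypothetical shapes)
feats = torch.randn(8, 64, requires_grad=True)
targets = torch.randint(0, 10, (8,))
criterion = GMStyleLoss(feat_dim=64, num_classes=10)
loss = criterion(feats, targets)
loss.backward()
```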
- Defending against substitute model black box adversarial attacks with the 01 loss [0.0]
We present 01 loss linear and 01 loss dual layer neural network models as a defense against substitute model black box attacks.
Our work shows that 01 loss models offer a powerful defense against substitute model black box attacks.
arXiv Detail & Related papers (2020-09-01T22:32:51Z)
- Towards adversarial robustness with 01 loss neural networks [0.0]
We propose a hidden layer 01 loss neural network trained with convolutional coordinate descent as a defense against adversarial attacks in machine learning.
We compare the minimum distortion of the 01 loss network to the binarized neural network and the standard sigmoid activation network with cross-entropy loss.
Our work shows that the 01 loss network has the potential to defend against black box adversarial attacks better than convex loss and binarized networks.
arXiv Detail & Related papers (2020-08-20T18:18:49Z)
- Towards Visual Distortion in Black-Box Attacks [68.61251746898323]
Adversarial examples in a black-box threat model injure the original images by introducing visual distortion.
We propose a novel black-box attack approach that can directly minimize the induced distortion by learning the noise distribution of the adversarial example.
Our attack results in much lower distortion than state-of-the-art black-box attacks and achieves a 100% success rate on InceptionV3, ResNet50 and VGG16bn.
arXiv Detail & Related papers (2020-07-21T04:42:43Z)
- Calibrated Surrogate Losses for Adversarially Robust Classification [92.37268323142307]
We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to linear models.
We also show that if the underlying distribution satisfies Massart's noise condition, convex losses can also be calibrated in the adversarial setting.
arXiv Detail & Related papers (2020-05-28T02:40:42Z)
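
To make the calibration statement above concrete, here is the adversarial 0-1 loss and an informal consistency-style reading of calibration, in generic notation of our own (an ℓ∞ perturbation set is chosen only for concreteness; the paper's definitions may be stated differently):

```latex
% Adversarial 0-1 loss of a linear predictor f_w(x) = w^\top x on a pair (x, y),
% with y \in \{-1, +1\}, under an \ell_\infty budget \epsilon:
\ell^{\mathrm{adv}}_{0\text{-}1}(w; x, y)
  = \sup_{\|\delta\|_\infty \le \epsilon}
    \mathbf{1}\!\left[\, y\, w^\top (x + \delta) \le 0 \,\right]

% Informally, a surrogate \phi is calibrated for a model class \mathcal{F} if driving
% the excess surrogate risk to zero also drives the excess adversarial 0-1 risk to zero:
R_\phi(f_n) \to \inf_{f \in \mathcal{F}} R_\phi(f)
  \quad \Longrightarrow \quad
R^{\mathrm{adv}}_{0\text{-}1}(f_n) \to \inf_{f \in \mathcal{F}} R^{\mathrm{adv}}_{0\text{-}1}(f)
```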
- DMT: Dynamic Mutual Training for Semi-Supervised Learning [69.17919491907296]
Self-training methods usually rely on single model prediction confidence to filter low-confidence pseudo labels.
We propose mutual training between two different models by a dynamically re-weighted loss function, called Dynamic Mutual Training.
Our experiments show that DMT achieves state-of-the-art performance in both image classification and semantic segmentation.
arXiv Detail & Related papers (2020-04-18T03:12:55Z)
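
The dynamically re-weighted mutual-training loss summarized above can be sketched generically as follows (an assumed form; DMT's exact weighting may differ): one model learns from the other's pseudo-labels, and each sample's loss is scaled by the learner's own confidence in that pseudo-label raised to a power, so disagreements contribute less.

```python
# Hedged sketch of a dynamically re-weighted mutual-training loss: model B is
# supervised by model A's pseudo-labels, with each sample's cross-entropy scaled
# by B's confidence in that pseudo-label raised to a power gamma. Illustrative
# only; the actual re-weighting used by DMT may differ.
import torch
import torch.nn.functional as F

def mutual_training_loss(logits_a: torch.Tensor, logits_b: torch.Tensor, gamma: float = 3.0) -> torch.Tensor:
    """logits_a: pseudo-label source; logits_b: model being trained. Shapes (batch, classes)."""
    pseudo = logits_a.argmax(dim=1).detach()                       # hard pseudo-labels from model A
    prob_b = F.softmax(logits_b, dim=1)
    weight = prob_b.gather(1, pseudo.unsqueeze(1)).squeeze(1).detach() ** gamma
    ce = F.cross_entropy(logits_b, pseudo, reduction="none")       # per-sample loss
    return (weight * ce).mean()

# toy usage on predictions for an unlabeled batch (hypothetical shapes)
logits_a = torch.randn(16, 21)
logits_b = torch.randn(16, 21, requires_grad=True)
loss = mutual_training_loss(logits_a, logits_b)
loss.backward()
```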
- Robust and On-the-fly Dataset Denoising for Image Classification [72.10311040730815]
On-the-fly Data Denoising (ODD) is robust to mislabeled examples, while introducing almost zero computational overhead compared to standard training.
ODD is able to achieve state-of-the-art results on a wide range of datasets including real-world ones such as WebVision and Clothing1M.
arXiv Detail & Related papers (2020-03-24T03:59:26Z)
- Robust binary classification with the 01 loss [0.0]
We develop a coordinate descent algorithm for a linear 01 loss and a single hidden layer 01 loss neural network.
We show our algorithms to be fast and comparable in accuracy to the linear support vector machine and logistic loss single hidden layer network for binary classification.
arXiv Detail & Related papers (2020-02-09T20:41:12Z)
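
A toy version of coordinate descent on the linear 01 loss, in the spirit of the entry above (illustrative only; the paper's algorithm is more elaborate): sweep through the coordinates of w and the bias, try a small grid of values for each, and keep whatever lowers the training 01 loss.

```python
# Toy coordinate descent for a linear 01-loss classifier (illustrative sketch,
# not the paper's algorithm): cycle through coordinates of w plus the bias and
# greedily keep whichever candidate value gives the lowest training 01 loss.
import numpy as np

def zero_one_loss(w, b, X, y):
    return np.mean(np.sign(X @ w + b) != y)

def coordinate_descent_01(X, y, sweeps=20, grid=np.linspace(-2, 2, 41), seed=0):
    rng = np.random.default_rng(seed)
    w, b = rng.normal(size=X.shape[1]), 0.0
    best = zero_one_loss(w, b, X, y)
    for _ in range(sweeps):
        for j in rng.permutation(X.shape[1] + 1):      # random coordinate order each sweep
            for v in grid:
                w_try, b_try = w.copy(), b
                if j < X.shape[1]:
                    w_try[j] = v
                else:
                    b_try = v
                loss = zero_one_loss(w_try, b_try, X, y)
                if loss < best:
                    w, b, best = w_try, b_try, loss
    return w, b, best

# toy usage on roughly linearly separable data with label noise
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = np.sign(X @ rng.normal(size=10))
y[rng.choice(300, 15, replace=False)] *= -1
w, b, err = coordinate_descent_01(X, y)
print(f"training 01 loss: {err:.3f}")
```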