An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder
- URL: http://arxiv.org/abs/2009.08016v4
- Date: Wed, 27 Jan 2021 17:58:34 GMT
- Title: An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder
- Authors: Liang Liang, Linhai Ma, Linchen Qian, Jiasong Chen
- Abstract summary: Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, so the predicted class labels on OOD samples are meaningless.
We show that classification-based OOD detection methods have no theoretical guarantee and are practically breakable by our OOD Attack algorithm.
We also show that Glow likelihood-based OOD detection is breakable as well.
- Score: 1.7305469511995404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs), especially convolutional neural networks, have
achieved superior performance on image classification tasks. However, such
performance is only guaranteed if the input to a trained model is similar to
the training samples, i.e., the input follows the probability distribution of
the training set. Out-Of-Distribution (OOD) samples do not follow the
distribution of the training set, and therefore the predicted class labels on OOD
samples become meaningless. Classification-based methods have been proposed for
OOD detection; however, in this study we show that this type of method has no
theoretical guarantee and is practically breakable by our OOD Attack algorithm
because of dimensionality reduction in the DNN models. We also show that Glow
likelihood-based OOD detection is breakable as well.
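The attack exploits the many-to-one mapping induced by dimensionality reduction: an encoder maps a high-dimensional image to a much lower-dimensional feature vector, so visually unrelated inputs can end up with nearly identical features. Below is a minimal sketch of that idea in PyTorch, assuming some trained `encoder` module; the function name, step count, and learning rate are illustrative assumptions, not the authors' implementation. It uses gradient descent to push an OOD image's feature vector toward that of an in-distribution sample, so a detector that relies only on encoder features would score the two inputs alike.

```python
import torch
import torch.nn.functional as F

def ood_attack(encoder, x_in, x_ood_init, steps=500, lr=0.01):
    """Illustrative sketch of an OOD attack on a feature-based detector.

    Optimizes an OOD image so that its encoder features match those of an
    in-distribution sample x_in. Because the encoder reduces dimensionality,
    such feature collisions are generally achievable.
    """
    target = encoder(x_in).detach()                # feature vector to imitate
    x_adv = x_ood_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(encoder(x_adv), target)  # feature-space distance
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)                 # stay in valid pixel range
    return x_adv.detach()
```

If the final loss is near zero, the crafted OOD input is indistinguishable from x_in as far as the encoder (and any detector built on its features) is concerned, which is the failure mode the paper demonstrates.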