Understanding, Detecting, and Separating Out-of-Distribution Samples and Adversarial Samples in Text Classification
- URL: http://arxiv.org/abs/2204.04458v1
- Date: Sat, 9 Apr 2022 12:11:59 GMT
- Title: Understanding, Detecting, and Separating Out-of-Distribution Samples and Adversarial Samples in Text Classification
- Authors: Cheng-Han Chiang and Hung-yi Lee
- Abstract summary: We compare the two types of anomalies (OOD and Adv samples) with the in-distribution (ID) ones from three aspects.
We find that OOD samples expose their aberration starting from the first layer, while the abnormalities of Adv samples do not emerge until the deeper layers of the model.
We propose a simple method to separate ID, OOD, and Adv samples using the hidden representations and output probabilities of the model.
- Score: 80.81532239566992
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the differences and commonalities between statistically out-of-distribution (OOD) samples and adversarial (Adv) samples, both of which hurt a text classification model's performance. We conduct analyses to compare the two types of anomalies (OOD and Adv samples) with the in-distribution (ID) ones from three aspects: the input features, the hidden representations in each layer of the model, and the output probability distributions of the classifier. We find that OOD samples expose their aberration starting from the first layer, while the abnormalities of Adv samples do not emerge until the deeper layers of the model. We also illustrate that the model's output probabilities for Adv samples tend to be less confident. Based on these observations, we propose a simple method to separate ID, OOD, and Adv samples using the hidden representations and output probabilities of the model. On multiple combinations of ID datasets, OOD datasets, and Adv attacks, our proposed method shows exceptional results in distinguishing ID, OOD, and Adv samples.
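The abstract names the two signals (hidden representations and output probabilities) but not how they combine. Below is a minimal sketch of one way to wire them together, assuming per-layer sentence vectors (e.g., BERT [CLS] states) have already been extracted; the Mahalanobis scoring and the three thresholds are illustrative assumptions, not the authors' exact procedure.

```python
# A sketch only: numpy-based routing of one test sample to ID / OOD / Adv.
# Assumes per-layer hidden vectors and softmax probabilities are precomputed;
# the tau_* thresholds would be tuned on validation data.
import numpy as np

def fit_id_statistics(id_feats):
    """id_feats: (n_samples, dim) ID hidden vectors at one layer."""
    mu = id_feats.mean(axis=0)
    cov = np.cov(id_feats, rowvar=False) + 1e-6 * np.eye(id_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(d @ cov_inv @ d)

def separate(layer_feats, probs, layer_stats, tau_first, tau_deep, tau_conf):
    """layer_feats: list of per-layer vectors for one sample;
    layer_stats: per-layer (mu, cov_inv) fitted on ID data."""
    first_dev = mahalanobis(layer_feats[0], *layer_stats[0])
    deep_dev = mahalanobis(layer_feats[-1], *layer_stats[-1])
    confidence = probs.max()
    if first_dev > tau_first:          # OOD: aberrant from the first layer on
        return "OOD"
    if deep_dev > tau_deep or confidence < tau_conf:  # Adv: deep deviation, low confidence
        return "Adv"
    return "ID"
```

The routing order mirrors the paper's findings: first-layer deviation points to OOD, while deep-layer deviation together with low output confidence points to Adv samples.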
Related papers
- Out-of-Distribution Detection with a Single Unconditional Diffusion Model [54.15132801131365]
Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples.
Traditionally, unsupervised methods utilize a deep generative model for OOD detection.
This paper explores whether a single model can perform OOD detection across diverse tasks.
arXiv Detail & Related papers (2024-05-20T08:54:03Z)
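As context for the entry above: the traditional generative-model recipe it builds on scores inputs by their (approximate) likelihood under a model trained on ID data and flags low-likelihood inputs as OOD. A minimal sketch, assuming a hypothetical `model.log_prob` interface:

```python
# A sketch under assumptions: `model.log_prob` is a hypothetical interface;
# a diffusion model would substitute an ELBO or probability-flow estimate.
import numpy as np

def ood_scores(model, batch):
    # Higher score = less likely under the ID-trained model = more OOD-like.
    return -np.asarray([model.log_prob(x) for x in batch])

def detect_ood(model, batch, threshold):
    return ood_scores(model, batch) > threshold
```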
- Pseudo Outlier Exposure for Out-of-Distribution Detection using Pretrained Transformers [3.8839179829686126]
A rejection network can be trained with ID and diverse outlier samples to detect test OOD samples.
We propose a method called Pseudo Outlier Exposure (POE) that constructs a surrogate OOD dataset by sequentially masking tokens related to ID classes.
Our method does not require any external OOD data and can be easily implemented within off-the-shelf Transformers.
arXiv Detail & Related papers (2023-07-18T17:29:23Z)
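A minimal sketch of the surrogate-OOD construction the POE entry describes: given per-token importance scores for the ID label (the attribution method is left abstract), sequentially mask the most class-indicative tokens so each step drifts further from the ID distribution. The `[MASK]` string and the greedy ordering are illustrative assumptions, not POE's exact recipe.

```python
# A sketch only: the importance scores and the "[MASK]" string are assumptions.
def pseudo_outliers(tokens, importance, n_steps=3, mask="[MASK]"):
    """tokens: list[str]; importance: per-token ID-class relevance (higher = more related)."""
    masked = list(tokens)
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    surrogates = []
    for i in order[:n_steps]:
        masked[i] = mask                     # remove the next most class-indicative token
        surrogates.append(" ".join(masked))  # each step drifts further from the ID manifold
    return surrogates

# e.g. pseudo_outliers(["the", "movie", "was", "wonderful"], [0.1, 0.3, 0.1, 0.9], n_steps=2)
# -> ["the movie was [MASK]", "the [MASK] was [MASK]"]
```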
- Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score [62.54911162109439]
Adversarial detection aims to determine whether a given sample is an adversarial one based on the discrepancy between natural and adversarial distributions.
We propose a new statistic called expected perturbation score (EPS), which is essentially the expected score of a sample after various perturbations.
We develop EPS-based maximum mean discrepancy (MMD) as a metric to measure the discrepancy between the test sample and natural samples.
arXiv Detail & Related papers (2023-05-25T13:14:58Z)
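A minimal sketch of the two pieces the EPS entry names, under assumptions: `score_fn` stands in for the paper's diffusion score, the perturbations are Gaussian, and the kernel bandwidth is arbitrary. EPS averages the score over perturbed copies of a sample; a kernelized MMD then compares test EPS features against those of natural samples.

```python
# A sketch under assumptions: `score_fn` stands in for the diffusion score,
# perturbations are Gaussian, and the kernel bandwidth gamma is arbitrary.
import numpy as np

def eps(score_fn, x, n_perturb=32, sigma=0.1, seed=0):
    """Expected perturbation score of one sample x: average score over noisy copies."""
    rng = np.random.default_rng(seed)
    noisy = x[None, :] + sigma * rng.standard_normal((n_perturb, x.shape[0]))
    return np.array([score_fn(z) for z in noisy]).mean(axis=0)

def mmd_rbf(X, Y, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD between X:(n,d) and Y:(m,d)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Detection would then threshold `mmd_rbf` over stacked EPS features of test versus natural samples, with the threshold calibrated on held-out data (e.g., by a permutation test).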
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
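For concreteness on the evaluation pipeline the study above dissects: the standard recipe embeds real and generated images with a feature extractor (left abstract here) and computes the Frechet distance between Gaussians fitted to the two embedding sets. This is textbook FID arithmetic, not the paper's contribution.

```python
# Textbook FID arithmetic; the feature extractor is left abstract.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """feats_*: (n, d) embeddings of real / generated images."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean))
```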
- Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning [54.61762276179205]
We propose a novel contrastive learning approach, MMBS, for building robust VQA models by Making the Most of Biased Samples.
Specifically, we construct positive samples for contrastive learning by eliminating the information related to spurious correlation from the original training samples.
We validate our contributions by achieving competitive performance on the OOD dataset VQA-CP v2 while preserving robust performance on the ID dataset VQA v2.
arXiv Detail & Related papers (2022-10-10T11:05:21Z)
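As background for the MMBS entry: the contrastive objective such an approach builds on is an InfoNCE loss that pulls an anchor toward its constructed positive and away from in-batch negatives. How MMBS constructs the debiased positives is the paper's contribution and is left abstract in this sketch.

```python
# A sketch of plain InfoNCE; MMBS's positive construction is not reproduced here.
import numpy as np

def info_nce(anchor, positive, negatives, temp=0.1):
    """anchor, positive: (d,) unit vectors; negatives: (n, d) unit vectors."""
    pos = np.exp(anchor @ positive / temp)
    neg = np.exp(negatives @ anchor / temp).sum()
    return float(-np.log(pos / (pos + neg)))
```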
- CADet: Fully Self-Supervised Out-Of-Distribution Detection With Contrastive Learning [11.897976063005315]
This work explores the use of self-supervised contrastive learning for the simultaneous detection of two types of OOD samples.
First, we pair self-supervised contrastive learning with the maximum mean discrepancy (MMD) two-sample test.
Motivated by this success, we introduce CADet, a novel method for OOD detection of single samples.
arXiv Detail & Related papers (2022-10-04T17:02:37Z)
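A hedged sketch of the single-sample idea the CADet entry gestures at: embed several augmentations of one input with a contrastively trained encoder and score the input by how tightly its views agree. The encoder, the augmentation, and the mean-similarity statistic are stand-ins; CADet's actual test statistics differ in detail.

```python
# A sketch under assumptions: encoder, augmentation, and the mean-similarity
# statistic are stand-ins for CADet's actual test statistics.
import numpy as np

def intra_similarity(encoder, augment, x, n_views=8):
    views = np.stack([encoder(augment(x)) for _ in range(n_views)])
    views /= np.linalg.norm(views, axis=1, keepdims=True)
    sim = views @ views.T
    iu = np.triu_indices(n_views, k=1)   # off-diagonal pairs only
    return float(sim[iu].mean())         # low agreement across views suggests OOD
```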
- ReSmooth: Detecting and Utilizing OOD Samples when Training with Data Augmentation [57.38418881020046]
Recent data augmentation (DA) techniques pursue diversity in the augmented training samples.
An augmentation strategy with high diversity usually introduces out-of-distribution (OOD) augmented samples.
We propose ReSmooth, a framework that first detects OOD samples among the augmented samples and then leverages them (one possible instantiation is sketched after this entry).
arXiv Detail & Related papers (2022-05-25T09:29:27Z)
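The entry does not say how detection works; one plausible instantiation, sketched below under loudly labeled assumptions, fits a two-component Gaussian mixture to per-sample training losses, treats the high-loss component as OOD-like, and softens its labels rather than discarding the samples.

```python
# An assumed instantiation, not necessarily ReSmooth's: a GMM on per-sample
# losses flags OOD-like augmented samples, which then get label smoothing.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_and_smooth(losses, labels, n_classes, smooth=0.2):
    """losses: (n,) training losses of augmented samples; labels: (n,) int class ids."""
    gm = GaussianMixture(n_components=2, random_state=0).fit(losses.reshape(-1, 1))
    ood = gm.predict(losses.reshape(-1, 1)) == np.argmax(gm.means_.ravel())
    targets = np.eye(n_classes)[labels]
    targets[ood] = targets[ood] * (1 - smooth) + smooth / n_classes  # soften OOD-like targets
    return targets, ood
```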
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
Training data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution.
When part of the test samples are drawn from a distribution that is far away from that of the training samples, the trained neural network has a tendency to make high confidence predictions for these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
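A hedged sketch of the scoring idea behind the WOOD entry: measure how far a classifier's softmax output is from the nearest fully confident (one-hot) prediction with a Wasserstein distance, and flag large distances as OOD. Using class indices as a 1-D ground space is an illustrative simplification of the paper's formulation.

```python
# A sketch with an illustrative simplification: class indices as the ground space.
import numpy as np
from scipy.stats import wasserstein_distance

def wood_score(probs):
    """probs: (n_classes,) softmax output for one sample; large score = likely OOD."""
    k = len(probs)
    support = np.arange(k)
    return min(wasserstein_distance(support, support, probs, np.eye(k)[c])
               for c in range(k))  # distance to the nearest one-hot prediction
```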
- Lightweight Detection of Out-of-Distribution and Adversarial Samples via Channel Mean Discrepancy [14.103271496247551]
We introduce Channel Mean Discrepancy (CMD), a model-agnostic distance metric for evaluating the statistics of features extracted by classification models.
We experimentally demonstrate that CMD magnitude is significantly smaller for legitimate samples than for OOD and adversarial samples.
Preliminary results show that our simple yet effective method outperforms several state-of-the-art approaches to detecting OOD and adversarial samples.
arXiv Detail & Related papers (2021-04-23T04:15:53Z)
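A minimal sketch of the statistic the CMD entry describes: compare a sample's per-channel feature means against channel means estimated on legitimate (ID) data, so legitimate inputs land close and OOD or adversarial inputs land farther away. The choice of layer and the L2 aggregation are assumed details.

```python
# A sketch: the layer choice and L2 aggregation are assumed details.
import numpy as np

def channel_means(feat):
    """feat: (channels, h, w) feature maps from one layer."""
    return feat.reshape(feat.shape[0], -1).mean(axis=1)

def cmd_score(feat, id_channel_means):
    # Small for legitimate samples, larger for OOD and adversarial ones.
    return float(np.linalg.norm(channel_means(feat) - id_channel_means))
```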
- Bridging In- and Out-of-distribution Samples for Their Better Discriminability [18.84265231678354]
We consider samples lying in between the two and use them for training a network.
We generate such samples using multiple image transformations that corrupt inputs in various ways and at different severity levels (a sketch of this step follows the entry).
We estimate where the samples generated by a single image transformation lie between ID and OOD using a network trained on clean ID samples.
arXiv Detail & Related papers (2021-01-07T11:34:18Z)
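A minimal sketch of the sample-generation step described above: corrupt a clean ID image with one transformation at increasing severity, producing samples that interpolate between ID and OOD. Gaussian noise stands in for the paper's full set of image transformations.

```python
# A sketch: Gaussian noise stands in for the paper's transformation set.
import numpy as np

def noisy_variants(image, severities=(0.02, 0.05, 0.1, 0.2, 0.4), seed=0):
    """image: float array in [0, 1]; returns one corrupted copy per severity level."""
    rng = np.random.default_rng(seed)
    return [np.clip(image + s * rng.standard_normal(image.shape), 0.0, 1.0)
            for s in severities]
```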