Variation and generality in encoding of syntactic anomaly information in
sentence embeddings
- URL: http://arxiv.org/abs/2111.06644v1
- Date: Fri, 12 Nov 2021 10:23:43 GMT
- Title: Variation and generality in encoding of syntactic anomaly information in
sentence embeddings
- Authors: Qinxuan Wu and Allyson Ettinger
- Abstract summary: We explore fine-grained differences in anomaly encoding by designing probing tasks that vary the hierarchical level at which anomalies occur in a sentence.
We test not only models' ability to detect a given anomaly, but also the generality of the detected anomaly signal.
Results suggest that all models encode some information supporting anomaly detection, but detection performance varies between anomalies.
- Score: 7.132368785057315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While sentence anomalies have been applied periodically for testing in NLP,
we have yet to establish a picture of the precise status of anomaly information
in representations from NLP models. In this paper we aim to fill two primary
gaps, focusing on the domain of syntactic anomalies. First, we explore
fine-grained differences in anomaly encoding by designing probing tasks that
vary the hierarchical level at which anomalies occur in a sentence. Second, we
test not only models' ability to detect a given anomaly, but also the
generality of the detected anomaly signal, by examining transfer between
distinct anomaly types. Results suggest that all models encode some information
supporting anomaly detection, but detection performance varies between
anomalies, and only representations from more recent transformer models show
signs of generalized knowledge of anomalies. Follow-up analyses support the
notion that these models pick up on a legitimate, general notion of sentence
oddity, while coarser-grained word position information is likely also a
contributor to the observed anomaly detection.
Related papers
- Learn Suspected Anomalies from Event Prompts for Video Anomaly Detection [49.91075101563298]
A novel framework is proposed to guide the learning of suspected anomalies from event prompts.
It enables a new multi-prompt learning process to constrain the visual-semantic features across all videos.
Our proposed model outperforms most state-of-the-art methods in terms of AP or AUC.
arXiv Detail & Related papers (2024-03-02T10:42:47Z)
- Anomaly Heterogeneity Learning for Open-set Supervised Anomaly Detection [26.08881235151695]
Open-set supervised anomaly detection (OSAD) aims at utilizing a few samples of anomaly classes seen during training to detect unseen anomalies.
We introduce a novel approach, namely Anomaly Heterogeneity Learning (AHL), that simulates a diverse set of heterogeneous anomaly distributions.
We show that AHL can 1) substantially enhance different state-of-the-art OSAD models in detecting seen and unseen anomalies, and 2) effectively generalize to unseen anomalies in new domains.
arXiv Detail & Related papers (2023-10-19T14:47:11Z)
- Prototypical Residual Networks for Anomaly Detection and Localization [80.5730594002466]
We propose a framework called Prototypical Residual Network (PRN).
PRN learns feature residuals of varying scales and sizes between anomalous and normal patterns to accurately reconstruct the segmentation maps of anomalous regions.
We present a variety of anomaly generation strategies that consider both seen and unseen appearance variance to enlarge and diversify anomalies.
arXiv Detail & Related papers (2022-12-05T05:03:46Z)
- Anomaly Detection by Leveraging Incomplete Anomalous Knowledge with Anomaly-Aware Bidirectional GANs [15.399369134281775]
The goal of anomaly detection is to identify anomalous samples from normal ones.
In this paper, a small number of anomalies are assumed to be available at the training stage, but collected from only a few anomaly types.
We propose to learn a probability distribution that can not only model the normal samples, but also guarantee to assign low density values for the collected anomalies.
arXiv Detail & Related papers (2022-04-28T08:12:49Z)
- Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
The anomalies seen during training, however, often do not cover every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Understanding the Effect of Bias in Deep Anomaly Detection [15.83398707988473]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data.
Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples.
In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection.
arXiv Detail & Related papers (2021-05-16T03:55:02Z)
- Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.