Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
- URL: http://arxiv.org/abs/2105.09270v1
- Date: Wed, 19 May 2021 17:30:28 GMT
- Title: Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
- Authors: Zhisheng Xiao, Qing Yan, Yali Amit
- Abstract summary: Methods based on the two-stage framework achieve state-of-the-art performance on this task.
We explore the possibility of avoiding the high cost of training a distinct representation for each outlier detection task.
In experiments, we demonstrate competitive or better performance on a variety of outlier detection benchmarks compared with previous two-stage methods.
- Score: 6.445605125467574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised outlier detection, which predicts whether a test sample is an outlier using only information from unlabelled inlier data, is an important but challenging task. Recently, methods based on a two-stage framework have achieved state-of-the-art performance on this task. The framework leverages self-supervised representation learning algorithms to train a feature extractor on inlier data, then applies a simple outlier detector in the feature space. In this paper, we explore the possibility of avoiding the high cost of training a distinct representation for each outlier detection task, and instead use a single pre-trained network as the universal feature extractor regardless of the source of in-domain data. In particular, we replace the task-specific feature extractor with a single network pre-trained on ImageNet with a self-supervised loss. In experiments, we demonstrate competitive or better performance on a variety of outlier detection benchmarks compared with previous two-stage methods, suggesting that learning representations from in-domain data may be unnecessary for outlier detection.
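As a concrete illustration of the two-stage recipe with a universal extractor, here is a minimal sketch: a fixed ImageNet-pretrained backbone produces features, and a k-nearest-neighbor distance to the inlier features scores test samples. The specifics are assumptions, not the paper's exact pipeline: torchvision's supervised ResNet-50 weights stand in for the self-supervised ImageNet model, and k-NN distance stands in for the simple detector.

```python
# Minimal sketch of the two-stage pipeline with a fixed pre-trained extractor.
# Assumptions (not from the paper): torchvision's supervised ResNet-50 stands
# in for the self-supervised ImageNet backbone, and k-NN distance in feature
# space stands in for the simple outlier detector.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: a single pre-trained network as the universal feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()               # expose 2048-d pooled features
backbone.eval().to(device)

@torch.no_grad()
def extract(images):                            # images: (N, 3, 224, 224) floats
    return backbone(images.to(device)).cpu()

# Stage 2: a simple detector on top of inlier features (k-NN distance here).
def knn_scores(inlier_feats, test_feats, k=5):
    d = torch.cdist(test_feats, inlier_feats)          # pairwise L2 distances
    return d.topk(k, largest=False).values.mean(1)     # higher = more outlying
```

Higher scores mean more outlying; such scores are typically evaluated with AUROC against the benchmark's inlier/outlier labels.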
Related papers
- Unsupervised Event Outlier Detection in Continuous Time [4.375463200687156]
We develop, to the best of our knowledge, the first unsupervised outlier detection approach for abnormal events.
We train a 'generator' that corrects outliers in the data together with a 'discriminator' that learns to distinguish the corrected data from the real data.
The experimental results show that our method can detect event outliers more accurately than the state-of-the-art approaches.
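A toy sketch of this corrector/critic idea on plain vectors rather than the paper's continuous-time event sequences; the architectures, losses, and scoring rule below are placeholder assumptions.

```python
# Toy corrector/critic loop on plain vectors; architectures, losses, and the
# scoring rule are placeholder assumptions, not the paper's design.
import torch
import torch.nn as nn

dim = 16
generator = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
discriminator = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, dim)                    # stand-in for real event data
mask = (torch.rand(128, dim) > 0.9).float()     # corrupt a few coordinates
noisy = real + 3.0 * mask * torch.randn(128, dim)

for _ in range(200):
    corrected = noisy + generator(noisy)        # generator proposes corrections
    d_loss = bce(discriminator(real), torch.ones(128, 1)) + \
             bce(discriminator(corrected.detach()), torch.zeros(128, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    g_loss = bce(discriminator(corrected), torch.ones(128, 1))  # fool the critic
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

with torch.no_grad():           # outlier score: how large a correction was needed
    score = generator(noisy).norm(dim=1)
```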
arXiv Detail & Related papers (2024-11-25T14:29:39Z)
- Quantile-based Maximum Likelihood Training for Outlier Detection [5.902139925693801]
We introduce a quantile-based maximum likelihood objective for learning the inlier distribution to improve the outlier separation during inference.
Our approach fits a normalizing flow to pre-trained discriminative features and detects the outliers according to the evaluated log-likelihood.
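A minimal sketch of the quantile idea, assuming a diagonal Gaussian in place of the paper's normalizing flow and simulated features in place of pre-trained ones; what it illustrates is only that the loss targets a low quantile of the per-sample log-likelihood rather than the mean.

```python
# Quantile-based likelihood objective on simulated features. A diagonal
# Gaussian stands in for the paper's normalizing flow; the point shown is
# that training maximizes a low quantile of per-sample log-likelihood,
# tightening the fit on the hardest inliers.
import torch

mu = torch.zeros(128, requires_grad=True)
log_sigma = torch.zeros(128, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def log_prob(x):                            # per-sample log-likelihood
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    return dist.log_prob(x).sum(dim=1)

def quantile_nll(x, q=0.1):
    return -torch.quantile(log_prob(x), q)  # focus on the q-th quantile

feats = torch.randn(256, 128)               # stand-in for pre-trained features
for _ in range(100):
    opt.zero_grad()
    loss = quantile_nll(feats)
    loss.backward()
    opt.step()

# At test time: score = -log_prob(test_feats); higher means more outlying.
```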
arXiv Detail & Related papers (2023-08-20T22:27:54Z)
- Label-Efficient Object Detection via Region Proposal Network Pre-Training [58.50615557874024]
We propose a simple pretext task that provides effective pre-training for the region proposal network (RPN).
In comparison with multi-stage detectors without RPN pre-training, our approach is able to consistently improve downstream task performance.
arXiv Detail & Related papers (2022-11-16T16:28:18Z)
- Meta-Learning for Unsupervised Outlier Detection with Optimal Transport [4.035753155957698]
We propose a novel approach to automate outlier detection based on meta-learning from previous datasets with outliers.
In particular, we leverage optimal transport to find the dataset with the most similar underlying distribution, and then apply the outlier detection techniques that proved to work best for that data distribution, as sketched below.
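The matching step can be sketched with the POT library (pip install pot); ot.dist and ot.emd2 are real POT functions, while the per-dataset feature sets and the detector lookup around them are assumptions for illustration.

```python
# Dataset matching via optimal transport with the POT library.
import numpy as np
import ot

def ot_distance(X, Y):
    """Exact OT cost between two empirical feature distributions."""
    a = np.full(len(X), 1.0 / len(X))       # uniform weights over samples
    b = np.full(len(Y), 1.0 / len(Y))
    M = ot.dist(X, Y)                       # squared-Euclidean cost matrix
    return ot.emd2(a, b, M)

def closest_dataset(new_feats, past_feats_by_name):
    """Name of the historical dataset whose distribution is nearest."""
    return min(past_feats_by_name,
               key=lambda name: ot_distance(new_feats, past_feats_by_name[name]))

# The detector that worked best on the matched dataset is then reused.
```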
arXiv Detail & Related papers (2022-11-01T10:36:48Z)
- Feature Representation Learning for Unsupervised Cross-domain Image Retrieval [73.3152060987961]
Current supervised cross-domain image retrieval methods achieve excellent performance.
However, the cost of data collection and labeling imposes an intractable barrier to practical deployment in real applications.
We introduce a new cluster-wise contrastive learning mechanism to help extract class semantic-aware features.
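One generic reading of a cluster-wise contrastive loss, in which samples sharing a k-means pseudo-label act as positives; the paper's exact formulation may differ.

```python
# Generic cluster-wise contrastive loss: samples sharing a k-means pseudo-label
# are treated as positives. One plausible reading of the mechanism only.
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(feats, cluster_ids, tau=0.1):
    z = F.normalize(feats, dim=1)
    sim = z @ z.T / tau                          # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude self-pairs
    log_p = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
    pos.fill_diagonal_(False)                    # positives: same cluster, not self
    return -log_p[pos].sum() / pos.sum().clamp(min=1)

feats = torch.randn(64, 32, requires_grad=True)  # stand-in features
cluster_ids = torch.randint(0, 8, (64,))         # stand-in k-means labels
loss = cluster_contrastive_loss(feats, cluster_ids)
loss.backward()
```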
arXiv Detail & Related papers (2022-07-20T07:52:14Z)
- CaSP: Class-agnostic Semi-Supervised Pretraining for Detection and Segmentation [60.28924281991539]
We propose a novel Class-agnostic Semi-supervised Pretraining (CaSP) framework to achieve a more favorable task-specificity balance.
Using 3.6M unlabeled images, we achieve a remarkable performance gain of 4.7% over the ImageNet-pretrained baseline on object detection.
arXiv Detail & Related papers (2021-12-09T14:54:59Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Unsupervised Outlier Detection using Memory and Contrastive Learning [53.77693158251706]
We argue that outlier detection can be done in the feature space by measuring the feature distance between outliers and inliers.
We propose a framework, MCOD, that uses a memory module and a contrastive learning module.
MCOD achieves strong performance and outperforms nine state-of-the-art methods.
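In that spirit, a hedged sketch of scoring by distance to a memory of inlier prototypes; the prototype construction (a random subset) and cosine scoring are assumptions, not MCOD's exact modules.

```python
# Scoring by distance to a memory of inlier prototypes, in the spirit of a
# memory module; prototype construction and cosine scoring are assumptions.
import torch
import torch.nn.functional as F

def build_memory(inlier_feats, num_prototypes=64):
    idx = torch.randperm(len(inlier_feats))[:num_prototypes]
    return F.normalize(inlier_feats[idx], dim=1)    # unit-norm prototype slots

def outlier_score(test_feats, memory):
    z = F.normalize(test_feats, dim=1)
    sim = z @ memory.T                              # cosine similarity per slot
    return 1.0 - sim.max(dim=1).values              # far from every slot = outlier
```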
arXiv Detail & Related papers (2021-07-27T07:35:42Z)
- Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training [20.242645823965145]
Out-of-scope intent detection is of practical importance in task-oriented dialogue systems.
We propose a method to train an out-of-scope intent classifier in a fully end-to-end manner by simulating the test scenario during training.
We evaluate our method extensively on four benchmark dialogue datasets and observe significant improvements over state-of-the-art approaches.
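One plausible instantiation of simulating the test scenario during training is to hold out a few known intent classes and relabel them as out-of-scope, so the classifier learns an explicit out-of-scope class end to end; this is an illustrative assumption, not necessarily the paper's recipe.

```python
# Simulate out-of-scope (OOS) inputs by relabeling held-out intent classes.
# Illustrative assumption only, not necessarily the paper's exact recipe.
import torch

def simulate_oos(labels, num_classes, heldout):
    """Remap held-out classes to a single OOS label (index = #kept classes)."""
    kept = [c for c in range(num_classes) if c not in heldout]
    remap = {c: i for i, c in enumerate(kept)}
    oos_label = len(kept)
    return torch.tensor([remap.get(int(y), oos_label) for y in labels])

labels = torch.randint(0, 10, (32,))                 # stand-in intent labels
targets = simulate_oos(labels, num_classes=10, heldout={3, 7})
# Train a (len(kept) + 1)-way classifier with cross-entropy on these targets.
```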
arXiv Detail & Related papers (2021-06-16T08:17:18Z)
- Homophily Outlier Detection in Non-IID Categorical Data [43.51919113927003]
This work introduces a novel outlier detection framework and its two instances to identify outliers in categorical data.
It first defines and incorporates distribution-sensitive outlier factors and their interdependence into a value-value graph-based representation.
The learned value outlierness allows for either direct outlier detection or outlying feature selection.
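A toy sketch of learned value outlierness: rare values receive a high base outlierness that is then smoothed over a value-value co-occurrence graph; the paper's actual outlier factors and learning rule differ.

```python
# Toy value outlierness: rarity-based scores refined by propagation over a
# value-value co-occurrence graph. The paper's actual design differs.
import numpy as np
import pandas as pd

def value_outlierness(df, alpha=0.5, iters=10):
    values = [(c, v) for c in df.columns for v in df[c].unique()]
    index = {val: i for i, val in enumerate(values)}
    base = np.array([1.0 - (df[c] == v).mean() for c, v in values])  # rarity
    A = np.zeros((len(values), len(values)))        # value co-occurrence counts
    for _, row in df.iterrows():
        ids = [index[(c, row[c])] for c in df.columns]
        for i in ids:
            for j in ids:
                if i != j:
                    A[i, j] += 1.0
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)   # row-normalize
    s = base.copy()
    for _ in range(iters):                          # homophily-style smoothing
        s = alpha * base + (1 - alpha) * A @ s
    return {val: s[i] for val, i in index.items()}

# An object's outlier score can then be the sum of its values' outlierness.
```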
arXiv Detail & Related papers (2021-03-21T23:29:33Z)
- Evolving Losses for Unsupervised Video Representation Learning [91.2683362199263]
We present a new method to learn video representations from large-scale unlabeled video data.
The proposed unsupervised representation learning results in a single RGB network and outperforms previous methods.
arXiv Detail & Related papers (2020-02-26T16:56:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.