Masked Contrastive Learning for Anomaly Detection
- URL: http://arxiv.org/abs/2105.08793v1
- Date: Tue, 18 May 2021 19:27:02 GMT
- Title: Masked Contrastive Learning for Anomaly Detection
- Authors: Hyunsoo Cho, Jinseok Seol, Sang-goo Lee
- Abstract summary: We propose a task-specific variant of contrastive learning named masked contrastive learning.
We also propose a new inference method dubbed self-ensemble inference.
- Score: 10.499890749386676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting anomalies is a fundamental aspect of safety-critical software
systems; however, it remains a long-standing problem. Numerous lines of work
have been proposed to alleviate the problem and have demonstrated their
effectiveness. In particular, self-supervised learning based methods are
attracting interest due to their capability of learning diverse representations
without additional labels. Among self-supervised learning tactics, contrastive
learning is one framework whose strength has been validated in various fields,
including anomaly detection. However, the primary objective of contrastive
learning is to learn task-agnostic features without any labels, which is not
entirely suited to discerning anomalies. In this paper, we propose a
task-specific variant of contrastive learning named masked contrastive
learning, which is better suited to anomaly detection. Moreover, we propose a
new inference method dubbed self-ensemble inference that further boosts
performance by leveraging the ability learned through auxiliary
self-supervision tasks. By combining our models, we outperform previous
state-of-the-art methods by a significant margin on various benchmark datasets.
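The abstract names the two ingredients but not their exact form, so the following is only a rough PyTorch sketch: an InfoNCE-style loss whose negative terms are re-weighted by a task-specific mask, plus an inference score averaged over auxiliary transformations. The batch layout, the neg_weight down-weighting rule, and the shifts transform list are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def masked_contrastive_loss(z, aux_labels, temperature=0.5, neg_weight=0.1):
    """InfoNCE-style loss with a task-specific mask (illustrative sketch).

    z          : (2N, D) embeddings; rows i and i + N are assumed to be two
                 augmented views of the same image (hypothetical layout).
    aux_labels : (2N,) labels from an auxiliary self-supervision task,
                 e.g. which rotation was applied to each view.
    neg_weight : down-weighting for negatives sharing an auxiliary label;
                 an assumed masking rule, not necessarily the paper's.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature              # (2N, 2N) scaled similarities
    n = z.size(0) // 2
    idx = torch.arange(2 * n)
    pos_idx = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])

    # Mask: 1 for ordinary negatives, neg_weight where auxiliary labels
    # match, 0 on the diagonal; positive pairs keep full weight.
    same_aux = aux_labels.unsqueeze(0) == aux_labels.unsqueeze(1)
    mask = torch.where(same_aux, torch.full_like(sim, neg_weight),
                       torch.ones_like(sim))
    mask.fill_diagonal_(0.0)
    mask[idx, pos_idx] = 1.0

    denom = (torch.exp(sim) * mask).sum(dim=1)
    return -(sim[idx, pos_idx] - torch.log(denom)).mean()

def self_ensemble_score(model, x, shifts):
    """Normality score averaged over auxiliary transformations (sketch).

    shifts : list of transforms (e.g. the four 90-degree rotations);
    model  : maps a batch to per-sample scores. Both are assumptions.
    """
    return torch.stack([model(t(x)) for t in shifts]).mean(dim=0)
```

Setting neg_weight to 1 recovers the ordinary task-agnostic InfoNCE loss, which is the point of contrast the abstract draws: the mask is what injects task-specific information.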
Related papers
- When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z)
- Sample-efficient Adversarial Imitation Learning [45.400080101596956]
We propose a self-supervised representation-based adversarial imitation learning method to learn state and action representations.
We show a 39% relative improvement over existing adversarial imitation learning methods on MuJoCo in a setting limited to 100 expert state-action pairs.
arXiv Detail & Related papers (2023-03-14T12:36:01Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective (see the sketch after this list).
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Continually Learning Self-Supervised Representations with Projected Functional Regularization [39.92600544186844]
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised methods.
These methods are unable to acquire new knowledge incrementally -- they are, in fact, mostly used only as a pre-training phase with IID data.
To prevent forgetting of previous knowledge, we propose the usage of functional regularization.
arXiv Detail & Related papers (2021-12-30T11:59:23Z)
- Contrastive Continual Learning with Feature Propagation [32.70482982044965]
Continual learners are designed to learn a stream of tasks with domain and class shifts across tasks.
We propose a general feature-propagation based contrastive continual learning method which is capable of handling multiple continual learning scenarios.
arXiv Detail & Related papers (2021-12-03T04:55:28Z)
- Efficient Anomaly Detection Using Self-Supervised Multi-Cue Tasks [2.9237210794416755]
We introduce novel discriminative and generative tasks which focus on different visual cues.
We present a new out-of-distribution detection function and highlight its better stability compared to other out-of-distribution detection methods.
Our model can more accurately learn highly discriminative features using these self-supervised tasks.
arXiv Detail & Related papers (2021-11-24T09:54:50Z)
- A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all the samples belonging to the same instance class lie on the same subspace with small dimension.
Empirical evidence shows that the proposed algorithm clearly surpasses the state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
We discuss two strategies for explicitly removing the detected false negatives during contrastive learning (see the sketch after this list).
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks with limited compute.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Anomaly Detection in Video via Self-Supervised and Multi-Task Learning [113.81927544121625]
Anomaly detection in video is a challenging computer vision problem.
In this paper, we approach anomalous event detection in video through self-supervised and multi-task learning at the object level.
arXiv Detail & Related papers (2020-11-15T10:21:28Z)
- A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z)
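Two of the entries above name concrete mechanisms; the sketches below illustrate them under stated assumptions and are not reproductions of the papers' methods. First, the GradCAM-based rationale detector from "Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems": the feature-map interface and the per-sample normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def ssl_gradcam(feature_map, ssl_loss):
    """GradCAM-style saliency induced from a self-supervised loss (sketch).

    feature_map : (B, C, H, W) activations of the last conv block, captured
                  e.g. with a forward hook and kept in the graph (assumption).
    ssl_loss    : scalar self-supervised objective, e.g. a contrastive loss.
    """
    # Channel weights = spatially averaged gradients of the SSL loss.
    grads, = torch.autograd.grad(ssl_loss, feature_map, retain_graph=True)
    weights = grads.mean(dim=(2, 3), keepdim=True)     # (B, C, 1, 1)
    cam = F.relu((weights * feature_map).sum(dim=1))   # (B, H, W)
    # Normalize each map to [0, 1] so it can screen discriminative regions.
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
```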
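Second, the sketch referenced under "Incremental False Negative Detection for Contrastive Learning". A fixed similarity threshold stands in for the paper's incremental detector (the threshold and the criterion are assumptions); of the two removal strategies the entry mentions, only hard exclusion is illustrated, which amounts to zeroing the flagged entries of the mask in the masked_contrastive_loss sketch after the main abstract.

```python
import torch
import torch.nn.functional as F

def detect_false_negatives(z, threshold=0.9):
    """Flag suspiciously similar negative pairs as false negatives (sketch).

    A fixed cosine-similarity threshold is an illustrative stand-in for the
    paper's incremental detection criterion.
    """
    z = F.normalize(z, dim=1)
    fn_mask = (z @ z.t()) > threshold   # (2N, 2N) boolean candidate mask
    fn_mask.fill_diagonal_(False)       # a sample is never its own negative
    return fn_mask
```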
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.