Cluster-aware Contrastive Learning for Unsupervised Out-of-distribution Detection
- URL: http://arxiv.org/abs/2302.02598v1
- Date: Mon, 6 Feb 2023 07:21:03 GMT
- Title: Cluster-aware Contrastive Learning for Unsupervised Out-of-distribution Detection
- Authors: Menglong Chen, Xingtai Gui, Shicai Fan
- Abstract summary: Unsupervised out-of-distribution (OOD) detection aims to separate samples falling outside the distribution of the training data without label information.
We propose a Cluster-aware Contrastive Learning (CCL) framework for unsupervised OOD detection that considers both instance-level and semantic-level information.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised out-of-distribution (OOD) detection aims to separate samples
falling outside the distribution of the training data without label information.
Among the many approaches, contrastive learning has shown an excellent
capability for learning discriminative representations in OOD detection. However,
its view is limited: by focusing merely on instance-level relationships between
augmented samples, it pays little attention to relationships between samples that
share the same semantics. Building on classic contrastive learning, we propose the
Cluster-aware Contrastive Learning (CCL) framework for unsupervised OOD
detection, which considers both instance-level and semantic-level information.
Specifically, we study a cooperation strategy between clustering and contrastive
learning to effectively extract latent semantics, and we design a cluster-aware
contrastive loss function to enhance OOD discriminative ability. The loss
function attends to global and local relationships simultaneously by treating
both the cluster centers and the samples belonging to the same cluster as
positive samples. We conducted extensive experiments to verify the
effectiveness of our framework, and the model achieves significant improvements
on various image benchmarks.
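As a concrete illustration of such a loss, here is a minimal sketch in PyTorch. The paper's exact formulation is not reproduced here: the function name, the temperature, the use of hard (k-means-style) cluster assignments, the normalization of centers, and the single-batch setting (no paired augmented views) are all assumptions made for brevity.

```python
import torch
import torch.nn.functional as F

def cluster_aware_contrastive_loss(z, cluster_ids, temperature=0.5):
    """Illustrative cluster-aware contrastive loss (a sketch, not the
    authors' code).

    z           : (N, D) batch of embeddings
    cluster_ids : (N,) cluster assignment per sample (e.g. from k-means)

    Positives for sample i are (a) the other samples in i's cluster
    (local relationship) and (b) i's cluster center (global relationship);
    everything else acts as a negative, as in standard InfoNCE.
    """
    z = F.normalize(z, dim=1)                          # cosine-similarity space
    n = z.size(0)

    uniq = cluster_ids.unique()                        # sorted cluster labels
    centers = torch.stack([z[cluster_ids == c].mean(dim=0) for c in uniq])
    centers = F.normalize(centers, dim=1)
    center_idx = torch.searchsorted(uniq, cluster_ids) # row of each sample's center

    sim = z @ z.t() / temperature                      # (N, N) sample-to-sample
    sim_c = z @ centers.t() / temperature              # (N, K) sample-to-center

    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)) & ~eye

    # Positive mass: same-cluster samples plus the sample's own center.
    pos = (torch.exp(sim) * pos_mask).sum(dim=1)
    pos = pos + torch.exp(sim_c[torch.arange(n), center_idx])
    # Denominator: all other samples and all centers.
    denom = (torch.exp(sim) * ~eye).sum(dim=1) + torch.exp(sim_c).sum(dim=1)
    return -torch.log(pos / denom).mean()
```

The point the sketch tries to capture is the positive set: not just an instance's augmented twin, but every sample in the same cluster (local) plus the cluster center itself (global).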
Related papers
- Cluster-guided Contrastive Graph Clustering Network [53.16233290797777]
We propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC).
We construct two views of the graph with specially designed Siamese encoders whose weights are not shared between the sibling sub-networks.
To construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples (see the sketch after this entry).
arXiv Detail & Related papers (2023-01-03T13:42:38Z)
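The negative-pair construction above can be pictured with a small sketch. This is one plausible reading, not the authors' implementation: centers are computed from high-confidence nodes only, and every pair of distinct centers is pushed apart. The confidence threshold and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def center_repulsion_loss(z, cluster_ids, confidence, tau=0.5, conf_thresh=0.9):
    """Treat centers of different high-confidence clusters as negatives
    (an illustrative reading of CCGC; assumes >= 2 clusters survive the
    threshold).

    z           : (N, D) node embeddings
    cluster_ids : (N,) cluster assignments
    confidence  : (N,) soft-assignment confidence per node
    """
    z = F.normalize(z, dim=1)
    keep = confidence > conf_thresh                 # high-confidence nodes only
    z_hc, labels = z[keep], cluster_ids[keep]
    centers = torch.stack([z_hc[labels == c].mean(dim=0) for c in labels.unique()])
    centers = F.normalize(centers, dim=1)

    k = centers.size(0)
    sim = centers @ centers.t() / tau               # (K, K) center-to-center
    off_diag = ~torch.eye(k, dtype=torch.bool, device=z.device)
    # Every other center is a negative: penalize high cross-center similarity.
    return torch.logsumexp(sim[off_diag].view(k, k - 1), dim=1).mean()
```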
- Understanding the properties and limitations of contrastive learning for Out-of-Distribution detection [3.2689702143620143]
A popular approach to out-of-distribution (OOD) detection is based on a self-supervised learning technique referred to as contrastive learning.
In this paper, we aim to understand the effectiveness and limitations of existing contrastive learning methods for OOD detection.
arXiv Detail & Related papers (2022-11-06T17:33:29Z)
- Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning [75.46664770669949]
We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
Our ReCo consistently achieves remarkable performance improvements.
arXiv Detail & Related papers (2022-11-02T03:25:28Z)
- Hybrid Dynamic Contrast and Probability Distillation for Unsupervised Person Re-Id [109.1730454118532]
Unsupervised person re-identification (Re-Id) has attracted increasing attention due to its practical application in real-world video surveillance systems.
We present the hybrid dynamic cluster contrast and probability distillation algorithm.
It formulates the unsupervised Re-Id problem into a unified local-to-global dynamic contrastive learning and self-supervised probability distillation framework.
arXiv Detail & Related papers (2021-09-29T02:56:45Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Deep Clustering based Fair Outlier Detection [19.601280507914325]
We propose an instance-level weighted representation learning strategy to enhance joint deep clustering and outlier detection.
Our DCFOD method consistently achieves superior performance on both outlier detection validity and two types of fairness notions in outlier detection.
arXiv Detail & Related papers (2021-06-09T15:12:26Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection method for self-supervised contrastive learning.
We discuss two strategies for explicitly removing the detected false negatives during contrastive learning (see the sketch after this entry).
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks under a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
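One of the two removal strategies, eliminating suspected false negatives from the loss, can be sketched as follows. The detection rule used here (a plain similarity threshold) and all names are assumptions; the paper's incremental detection procedure is more involved.

```python
import torch
import torch.nn.functional as F

def infonce_without_false_negatives(z1, z2, tau=0.2, fn_thresh=0.8):
    """InfoNCE between two augmented views, masking out suspected false
    negatives (illustrative; threshold-based detection is an assumption).

    z1, z2 : (N, D) embeddings of two views of the same N images.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                         # (N, N) view1-to-view2
    n = sim.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=sim.device)

    # Suspected false negatives: off-diagonal pairs whose cosine
    # similarity is nearly as high as a true positive pair's.
    false_neg = (sim > fn_thresh / tau) & ~eye

    # Standard InfoNCE, but suspected false negatives are excluded from
    # the denominator instead of being pushed apart.
    logits = sim.masked_fill(false_neg, float('-inf'))
    targets = torch.arange(n, device=sim.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)
```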
- Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning [16.77134235390429]
A key challenge of OOD detection is learning discriminative semantic features.
We propose a supervised contrastive learning objective to minimize intra-class variance (see the sketch after this entry).
We employ an adversarial augmentation mechanism to obtain pseudo-diverse views of a sample.
arXiv Detail & Related papers (2021-05-29T12:54:22Z)
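A supervised contrastive objective of this kind is typically the SupCon loss, where every same-class sample in the batch is a positive; minimizing it pulls a class's embeddings together, i.e. it reduces intra-class variance. Whether the cited paper uses exactly this form is an assumption.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Standard SupCon-style loss (a sketch; not necessarily the cited
    paper's exact objective).

    z      : (N, D) embeddings
    labels : (N,) class labels (e.g. intent labels)
    """
    z = F.normalize(z, dim=1)
    n = z.size(0)
    sim = z @ z.t() / tau
    eye = torch.eye(n, dtype=torch.bool, device=z.device)

    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # Log-softmax over all other samples, then average over positives.
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')),
                                     dim=1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()
```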
- Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z)
- On Mutual Information in Contrastive Learning for Visual Representations [19.136685699971864]
Unsupervised, "contrastive" learning algorithms in vision have been shown to learn representations that perform remarkably well on transfer tasks.
We show that this family of algorithms maximizes a lower bound on the mutual information between two or more "views" of an image (see the note after this list).
We find that the choice of negative samples and views is critical to the success of these algorithms.
arXiv Detail & Related papers (2020-05-27T04:21:53Z)
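For reference, the lower bound mentioned in the last entry is usually stated for the InfoNCE objective: with one positive and K negatives per anchor, the optimal loss bounds the mutual information between the two views. This is the standard form of the result, not necessarily the exact statement in the cited paper.

```latex
% InfoNCE objective for two views x, x' with K negatives, and the
% standard mutual-information lower bound it induces.
\[
\mathcal{L}_{\mathrm{InfoNCE}}
  = -\,\mathbb{E}\!\left[
      \log \frac{e^{f(x,x')}}{e^{f(x,x')} + \sum_{k=1}^{K} e^{f(x,x_k^-)}}
    \right],
\qquad
I(x;\,x') \;\ge\; \log(K+1) - \mathcal{L}_{\mathrm{InfoNCE}}.
\]
```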