Doubly Contrastive Deep Clustering
- URL: http://arxiv.org/abs/2103.05484v1
- Date: Tue, 9 Mar 2021 15:15:32 GMT
- Title: Doubly Contrastive Deep Clustering
- Authors: Zhiyuan Dang, Cheng Deng, Xu Yang, Heng Huang
- Abstract summary: We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
- Score: 135.7001508427597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep clustering provides more effective features than
conventional methods and has thus become an important technique in current
unsupervised learning. However, most deep clustering methods overlook the vital
positive and negative pairs introduced by data augmentation, and hence the
significance of contrastive learning, which leads to suboptimal performance. In
this paper, we present a novel Doubly Contrastive Deep Clustering (DCDC)
framework, which constructs contrastive loss over both sample and class views
to obtain more discriminative features and competitive results. Specifically,
for the sample view, we take the class distributions of the original sample and
its augmented version as a positive pair, and pair the original sample with the
other augmented samples in the mini-batch to form negative pairs. We then adopt a
sample-wise contrastive loss to pull positive pairs together and push
negative pairs apart. Similarly, for the class view, we build the
positive and negative pairs from the sample distribution of the class. In this
way, the two contrastive losses constrain the clustering results of
mini-batch samples at both the sample and class levels. Extensive experimental
results on six benchmark datasets demonstrate the superiority of our proposed
model against state-of-the-art methods. In particular, on the challenging
Tiny-ImageNet dataset, our method leads the latest comparison method by 5.6%. Our
code will be available at https://github.com/ZhiyuanDang/DCDC.
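As a reading aid, here is a minimal PyTorch sketch of the two losses the abstract describes, treating each row of the softmax output as a sample's class distribution and each column as a class's sample distribution. The function names, temperatures, and the symmetric InfoNCE form are assumptions on our part, not necessarily the released implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.5):
    """Symmetric InfoNCE between row-aligned matrices.

    Row i of `a` and row i of `b` form the positive pair; every other
    row in the batch serves as a negative.
    """
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / temperature                     # (M, M) similarities
    targets = torch.arange(a.size(0), device=a.device)   # positives on diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def dcdc_loss(p1, p2, t_sample=0.5, t_class=1.0):
    """Doubly contrastive loss over (N, K) class-probability matrices of
    two augmented views: rows give the sample view, columns the class view.
    """
    sample_loss = info_nce(p1, p2, t_sample)        # contrast per-sample rows
    class_loss = info_nce(p1.t(), p2.t(), t_class)  # contrast per-class columns
    return sample_loss + class_loss

# Toy usage: a mini-batch of 8 samples over 4 clusters.
p1 = torch.randn(8, 4).softmax(dim=1)
p2 = torch.randn(8, 4).softmax(dim=1)
loss = dcdc_loss(p1, p2)
```

Note how transposing the probability matrix is all it takes to turn the same contrastive machinery into the class-view loss.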
Related papers
- Synthetic Hard Negative Samples for Contrastive Learning [8.776888865665024]
This paper proposes a novel feature-level method, namely sampling synthetic hard negative samples for contrastive learning (SSCL).
We generate more, and harder, negative samples by mixing existing negative samples, and then sample them by controlling each candidate's contrast with the anchor sample (a sketch of this mixing step follows this entry).
Our proposed method improves classification performance on several image datasets and can be readily integrated into existing methods.
arXiv Detail & Related papers (2023-04-06T09:54:35Z)
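A minimal sketch of the negative-mixing idea described above. All names, the uniform mixing coefficient, and the top-k selection are assumptions, not SSCL's exact procedure.

```python
import torch
import torch.nn.functional as F

def synthesize_hard_negatives(anchor, negatives, num_synthetic=16, top_k=8):
    """Blend random pairs of existing negatives, then keep the synthetic
    ones most similar to the anchor (the "hardest").

    anchor: (D,) embedding; negatives: (M, D) embeddings.
    """
    m = negatives.size(0)
    i = torch.randint(m, (num_synthetic,))
    j = torch.randint(m, (num_synthetic,))
    lam = torch.rand(num_synthetic, 1)           # random mixing coefficients
    mixed = lam * negatives[i] + (1 - lam) * negatives[j]
    mixed = F.normalize(mixed, dim=1)
    sims = mixed @ F.normalize(anchor, dim=0)    # similarity to the anchor
    hardest = mixed[sims.topk(top_k).indices]    # harder = more similar
    return torch.cat([negatives, hardest], dim=0)

# Toy usage: enlarge a pool of 32 negatives in a 128-d embedding space.
neg = F.normalize(torch.randn(32, 128), dim=1)
all_negatives = synthesize_hard_negatives(torch.randn(128), neg)
```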
- Cluster-guided Contrastive Graph Clustering Network [53.16233290797777]
We propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC).
We construct two views of the graph with dedicated Siamese encoders whose weights are not shared between the sibling sub-networks.
To construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples (a sketch of this idea follows this entry).
arXiv Detail & Related papers (2023-01-03T13:42:38Z)
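An embedding-level sketch of using high-confidence cluster centers as negatives. It ignores CCGC's graph views and Siamese encoders, and the confidence threshold and cross-entropy form are assumptions.

```python
import torch
import torch.nn.functional as F

def center_negative_loss(z, probs, conf=0.9, temperature=0.5):
    """Pull each sample toward its own cluster center and away from the
    centers of the other clusters, building centers only from confident
    assignments.

    z: (N, D) sample embeddings; probs: (N, K) soft cluster assignments.
    """
    z = F.normalize(z, dim=1)
    confidence, labels = probs.max(dim=1)
    w = probs * (confidence > conf).float().unsqueeze(1)  # confident rows only
    w = w + 1e-8                                          # avoid empty clusters
    centers = F.normalize((w.t() @ z) / w.sum(dim=0).unsqueeze(1), dim=1)
    # Softmax over centers: the K-1 other centers act as negatives.
    logits = z @ centers.t() / temperature                # (N, K)
    return F.cross_entropy(logits, labels)

# Toy usage: 16 samples, 3 clusters, fairly peaked assignments.
z = torch.randn(16, 64)
probs = F.softmax(5 * torch.randn(16, 3), dim=1)
loss = center_negative_loss(z, probs)
```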
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between the positive and negative groups (a sketch follows this entry).
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
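One plausible reading of the group-wise strategy, sketched below: weight samples within each group by hardness, then contrast the positive group against the negative group. The weighting schemes and temperatures are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def group_contrast_loss(anchor, positives, negatives, t_pos=0.5, t_neg=0.5):
    """Within-group comparison followed by a group-vs-group contrast.

    anchor: (D,); positives: (P, D); negatives: (M, D).
    """
    a = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1) @ a   # (P,) anchor-positive sims
    neg = F.normalize(negatives, dim=1) @ a   # (M,) anchor-negative sims
    w_pos = F.softmax(-pos / t_pos, dim=0)    # emphasize far (hard) positives
    w_neg = F.softmax(neg / t_neg, dim=0)     # emphasize near (hard) negatives
    # Pull the weighted positive group in, push the weighted negative group away.
    return (w_neg * neg).sum() - (w_pos * pos).sum()
```

The two softmax weightings are where the within-group comparison happens; the final difference is the contrast between the two groups.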
- Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss (a sketch of one reading of this loss follows this entry).
arXiv Detail & Related papers (2021-04-18T07:47:10Z)
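A sketch of one reading of a "median triplet loss": pick the negative whose similarity to the anchor is the batch median, rather than the hardest or a random one. The paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def median_triplet_loss(anchor, positive, negatives, margin=0.2):
    """Margin triplet loss on cosine similarities, using the
    median-hardness negative from the batch.

    anchor, positive: (D,); negatives: (M, D).
    """
    a = F.normalize(anchor, dim=0)
    p = F.normalize(positive, dim=0)
    sims = F.normalize(negatives, dim=1) @ a       # (M,) anchor-negative sims
    med_idx = sims.argsort()[sims.numel() // 2]    # index of the median sim
    n = F.normalize(negatives[med_idx], dim=0)
    return F.relu((a @ n) - (a @ p) + margin)
```

The median choice avoids both trivially easy negatives (under-clustering) and likely false negatives among the very hardest ones (over-clustering).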
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling yields a representation that tightly clusters each class and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead (a sketch of hardness-weighted negatives follows this entry).
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
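A simplified sketch of hardness-weighted negative sampling inside InfoNCE: negatives are reweighted in proportion to exp(beta * similarity), so negatives near the anchor dominate the denominator. This is our simplification of the paper's estimator; the function name and `beta` scaling are assumptions.

```python
import torch
import torch.nn.functional as F

def hard_negative_nce(anchor, positive, negatives, beta=1.0, temperature=0.5):
    """InfoNCE with hardness-weighted negatives; beta=0 recovers
    uniform negative sampling.

    anchor, positive: (D,); negatives: (M, D).
    """
    a = F.normalize(anchor, dim=0)
    p = F.normalize(positive, dim=0)
    n_sims = F.normalize(negatives, dim=1) @ a     # (M,) similarities
    w = F.softmax(beta * n_sims, dim=0)            # hardness weights, sum to 1
    m = negatives.size(0)
    pos_term = torch.exp((a @ p) / temperature)
    neg_term = m * (w * torch.exp(n_sims / temperature)).sum()
    return -torch.log(pos_term / (pos_term + neg_term))
```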
- Conditional Negative Sampling for Contrastive Learning of Visual Representations [19.136685699971864]
We show that choosing difficult negatives, i.e. those more similar to the current instance, can yield stronger representations.
We introduce a family of mutual information estimators that sample negatives conditionally, in a "ring" around each positive (a sketch follows this entry).
We prove that these estimators lower-bound mutual information, with higher bias but lower variance than NCE.
arXiv Detail & Related papers (2020-10-05T14:17:32Z)
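A minimal sketch of the "ring" idea: keep candidate negatives whose similarity to the anchor falls between two percentiles, i.e. close enough to be difficult but not so close that they are likely false negatives. The percentile bounds and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def ring_negatives(anchor, candidates, lower=0.7, upper=0.95):
    """Select negatives inside a similarity "ring" around the anchor.

    anchor: (D,); candidates: (M, D). Returns the rows of `candidates`
    whose cosine similarity to the anchor lies between the given
    percentiles of the candidate similarity distribution.
    """
    a = F.normalize(anchor, dim=0)
    sims = F.normalize(candidates, dim=1) @ a
    lo = torch.quantile(sims, lower)
    hi = torch.quantile(sims, upper)
    ring = (sims >= lo) & (sims <= hi)
    return candidates[ring]

# Toy usage: pick ring negatives from 256 candidates.
negs = ring_negatives(torch.randn(128), torch.randn(256, 128))
```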