HoughCL: Finding Better Positive Pairs in Dense Self-supervised Learning
- URL: http://arxiv.org/abs/2111.10794v1
- Date: Sun, 21 Nov 2021 11:23:12 GMT
- Title: HoughCL: Finding Better Positive Pairs in Dense Self-supervised Learning
- Authors: Yunsung Lee, Teakgyu Hong, Han-Cheol Cho, Junbum Cha, Seungryong Kim
- Abstract summary: We introduce Hough Contrastive Learning (HoughCL), a Hough space based method that enforces geometric consistency between two dense features.
Compared to previous works, our method shows better or comparable performance on dense prediction fine-tuning tasks.
- Score: 30.442474932594386
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, self-supervised methods show remarkable achievements in image-level
representation learning. Nevertheless, their image-level self-supervision leads
the learned representations to be sub-optimal for dense prediction tasks such as
object detection and instance segmentation. To tackle this issue, several
recent self-supervised learning methods have extended image-level single
embedding to pixel-level dense embeddings. Unlike image-level representation
learning, due to the spatial deformation of augmentation, it is difficult to
sample pixel-level positive pairs. Previous studies have sampled pixel-level
positive pairs using the winner-takes-all among similarity or thresholding
warped distance between dense embeddings. However, these naive methods struggle
with background clutter and outliers. In this paper, we
introduce Hough Contrastive Learning (HoughCL), a Hough space based method that
enforces geometric consistency between two dense features. HoughCL achieves
robustness against background clutter and outliers. Furthermore, compared to the
baseline, our dense positive pairing method introduces no additional learnable
parameters and only a small extra computation cost. Compared to previous works,
our method shows better or comparable performance on dense prediction
fine-tuning tasks.
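The winner-takes-all pairing strategy the abstract refers to can be sketched in a few lines: for each spatial location in one view, pick the most similar location in the other view as its positive. This is a minimal illustration of the naive baseline the paper improves on, not the paper's own code; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def wta_positive_pairs(feat_a, feat_b):
    """Winner-takes-all positive pairing between two dense feature maps.

    feat_a, feat_b: (N, D) arrays of L2-normalized embeddings, one row per
    spatial location (the feature map flattened). For each location in view A,
    the best-matching location in view B is taken as its positive pair.
    """
    sim = feat_a @ feat_b.T                                # (N, N) cosine similarities
    match = sim.argmax(axis=1)                             # best match in view B per row
    scores = sim[np.arange(len(feat_a)), match]            # similarity of each chosen pair
    return match, scores

# Toy usage with random normalized features
rng = np.random.default_rng(0)
a = rng.normal(size=(16, 8)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(16, 8)); b /= np.linalg.norm(b, axis=1, keepdims=True)
idx, scores = wta_positive_pairs(a, b)
```

Because each location votes independently, a cluttered background patch can win the match for a foreground pixel, which is exactly the failure mode HoughCL's geometric-consistency voting is designed to suppress.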
Related papers
- A Contrastive Learning Foundation Model Based on Perfectly Aligned Sample Pairs for Remote Sensing Images [18.191222010916405]
We present a novel self-supervised method called PerA, which produces all-purpose remote sensing features through semantically Perfectly Aligned sample pairs. Our framework provides high-quality features by ensuring consistency between teacher and student. We collect an unlabeled pre-training dataset containing about 5 million RS images.
arXiv Detail & Related papers (2025-05-26T03:12:49Z)
- Contrastive Learning with Synthetic Positives [11.932323457691945]
Contrastive learning with the nearest neighbor has proved to be one of the most efficient self-supervised learning (SSL) techniques.
In this paper, we introduce a novel approach called Contrastive Learning with Synthetic Positives (NCLP)
NCLP utilizes synthetic images, generated by an unconditional diffusion model, as the additional positives to help the model learn from diverse positives.
arXiv Detail & Related papers (2024-08-30T01:47:43Z)
- Multilevel Saliency-Guided Self-Supervised Learning for Image Anomaly Detection [15.212031255539022]
Anomaly detection (AD) is a fundamental task in computer vision.
We propose CutSwap, which leverages saliency guidance to incorporate semantic cues for augmentation.
CutSwap achieves state-of-the-art AD performance on two mainstream AD benchmark datasets.
arXiv Detail & Related papers (2023-11-30T08:03:53Z)
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images based on cross-domain mix-up and build the pretext task based on image reconstruction and transparency prediction.
arXiv Detail & Related papers (2022-04-02T16:58:36Z)
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, two contrastive losses successfully constrain the clustering results of mini-batch samples in both sample and class level.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Dense Contrastive Learning for Self-Supervised Visual Pre-Training [102.15325936477362]
We present dense contrastive learning, which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images.
Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only 1% slower).
arXiv Detail & Related papers (2020-11-18T08:42:32Z)
- Contrastive Rendering for Ultrasound Image Segmentation [59.23915581079123]
The lack of sharp boundaries in US images remains an inherent challenge for segmentation.
We propose a novel and effective framework to improve boundary estimation in US images.
Our proposed method outperforms state-of-the-art methods and has the potential to be used in clinical practice.
arXiv Detail & Related papers (2020-10-10T07:14:03Z)
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [57.33699905852397]
We propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring to compute pairwise comparisons.
Our method simultaneously clusters the data while enforcing consistency between cluster assignments.
Our method can be trained with large and small batches and can scale to unlimited amounts of data.
arXiv Detail & Related papers (2020-06-17T14:00:42Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
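Several of the related works above (e.g. the DenseCL entry) optimize a pairwise contrastive loss at the pixel level between two views. A minimal sketch of such a pixel-level InfoNCE objective follows; this is a generic illustration of the formulation, not the code of any paper listed here, and the function and variable names are assumptions.

```python
import numpy as np

def pixel_infonce(feat_a, feat_b, pos_idx, tau=0.2):
    """Pixel-level InfoNCE: each location in view A treats its matched
    location in view B as the positive and all other locations as negatives.

    feat_a, feat_b: (N, D) L2-normalized dense embeddings (flattened maps).
    pos_idx: (N,) index of the positive location in view B for each row of A.
    """
    logits = (feat_a @ feat_b.T) / tau               # (N, N) scaled similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(feat_a)), pos_idx].mean()

# Toy usage: identity pairing between two random normalized feature maps
rng = np.random.default_rng(1)
a = rng.normal(size=(32, 16)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(32, 16)); b /= np.linalg.norm(b, axis=1, keepdims=True)
loss = pixel_infonce(a, b, np.arange(32))
```

The quality of `pos_idx` is exactly what distinguishes the methods surveyed here: naive winner-takes-all or distance-threshold pairing feeds noisy positives into this loss, whereas HoughCL filters the pairing with a geometric-consistency vote before the loss is applied.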