SUVR: A Search-based Approach to Unsupervised Visual Representation Learning
- URL: http://arxiv.org/abs/2305.14754v1
- Date: Wed, 24 May 2023 05:57:58 GMT
- Title: SUVR: A Search-based Approach to Unsupervised Visual Representation Learning
- Authors: Yi-Zhan Xu, Chih-Yao Chen, Cheng-Te Li
- Abstract summary: We argue that image pairs should have varying degrees of similarity, and the negative samples should be allowed to be drawn from the entire dataset.
In this work, we propose Search-based Unsupervised Visual Representation Learning (SUVR) to learn better image representations in an unsupervised manner.
- Score: 11.602089225841631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning has grown in popularity because of the difficulty of
collecting annotated data and the development of modern frameworks that allow
us to learn from unlabeled data. Existing studies, however, either disregard
variations at different levels of similarity or only consider negative samples
from one batch. We argue that image pairs should have varying degrees of
similarity, and the negative samples should be allowed to be drawn from the
entire dataset. In this work, we propose Search-based Unsupervised Visual
Representation Learning (SUVR) to learn better image representations in an
unsupervised manner. We first construct a graph from the image dataset by the
similarity between images, and adopt the concept of graph traversal to explore
positive samples. In the meantime, we make sure that negative samples can be
drawn from the full dataset. Quantitative experiments on five benchmark image
classification datasets demonstrate that SUVR can significantly outperform
strong competing methods on unsupervised embedding learning. Qualitative
experiments also show that SUVR can produce better representations in which
similar images are clustered closer together than unrelated images in the
latent space.
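The pipeline the abstract describes (a similarity graph over the dataset, graph traversal to explore positives, negatives drawn from the full dataset rather than one batch) can be sketched as follows. The kNN construction, traversal depth, and sample counts are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def knn_graph(features, k=5):
    """Build a k-nearest-neighbour adjacency list from cosine similarity."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T                 # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)          # exclude self-matches
    return np.argsort(-sim, axis=1)[:, :k]  # top-k neighbours per image

def explore_positives(graph, anchor, depth=2):
    """Breadth-first traversal: nodes reached within `depth` hops are
    treated as positives of decreasing similarity to the anchor."""
    frontier, positives = {anchor}, set()
    for _ in range(depth):
        frontier = {n for node in frontier for n in graph[node]} - positives - {anchor}
        positives |= frontier
    return positives

def sample_negatives(n_images, positives, anchor, size, rng):
    """Negatives may come from the entire dataset, not just one batch."""
    candidates = np.setdiff1d(np.arange(n_images), list(positives | {anchor}))
    return rng.choice(candidates, size=size, replace=False)

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 32))          # stand-in for encoder outputs
g = knn_graph(feats, k=5)
pos = explore_positives(g, anchor=0, depth=2)
neg = sample_negatives(100, pos, anchor=0, size=10, rng=rng)
```

With depth greater than one, multi-hop neighbours enter the positive set, which is one way to realise "varying degrees of similarity" between image pairs.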
Related papers
- InfoNCE Loss Provably Learns Cluster-Preserving Representations [54.28112623495274]
We show that the representation learned by InfoNCE with a finite number of negative samples is consistent with respect to clusters in the data.
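The InfoNCE objective this result concerns scores an anchor against one positive and a finite set of negatives; a minimal sketch, where cosine similarity and a temperature of 0.1 are illustrative choices:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: cross-entropy of the positive against positive + negatives."""
    def cos(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                # positive sits at index 0
```

The loss shrinks as the anchor aligns with its positive and separates from the negatives.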
arXiv Detail & Related papers (2023-02-15T19:45:35Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Focus on the Positives: Self-Supervised Learning for Biodiversity Monitoring [9.086207853136054]
We address the problem of learning self-supervised representations from unlabeled image collections.
We exploit readily available context data that encodes information such as the spatial and temporal relationships between the input images.
For the critical task of global biodiversity monitoring, this results in image features that can be adapted to challenging visual species classification tasks with limited human supervision.
arXiv Detail & Related papers (2021-08-14T01:12:41Z)
- AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent the image in low dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z)
- Divide and Contrast: Self-supervised Learning from Uncurated Data [10.783599686099716]
Divide and Contrast (DnC) alternates between contrastive learning and clustering-based hard negative mining.
When pretrained on less curated datasets, DnC greatly improves the performance of self-supervised learning on downstream tasks.
arXiv Detail & Related papers (2021-05-17T17:59:03Z)
- G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling [0.8164433158925593]
In computer vision, it is evident that deep neural networks perform better in a supervised setting with a large amount of labeled data.
In this work, we propose that, with the normalized temperature-scaled cross-entropy (NT-Xent) loss function, it is beneficial to not have images of the same category in the same batch.
We use the latent space representation of a denoising autoencoder trained on the unlabeled dataset and cluster them with k-means to obtain pseudo labels.
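That pipeline (cluster the latent codes, then build batches that keep images with the same pseudo label apart) might be sketched as follows; the toy k-means, cluster count, and batch size are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means; stands in for clustering autoencoder latents."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def guided_batches(labels, batch_size):
    """Batches in which no two images share a pseudo label, so likely
    same-category pairs are never treated as in-batch negatives."""
    buckets = {c: list(np.flatnonzero(labels == c)) for c in np.unique(labels)}
    batches = []
    while any(buckets.values()):
        batch = []
        for b in buckets.values():
            if b and len(batch) < batch_size:
                batch.append(b.pop())
        batches.append(batch)
    return batches

rng = np.random.default_rng(0)
latents = rng.normal(size=(64, 16))   # stand-in for autoencoder latent codes
pseudo_labels = kmeans(latents, k=8)
batches = guided_batches(pseudo_labels, batch_size=8)
```

Each batch draws at most one image per cluster, so NT-Xent never contrasts two images that the pseudo labels consider the same category.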
arXiv Detail & Related papers (2020-09-25T02:25:37Z)
- Delving into Inter-Image Invariance for Unsupervised Visual Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z) - Unsupervised Landmark Learning from Unpaired Data [117.81440795184587]
Recent attempts for unsupervised landmark learning leverage synthesized image pairs that are similar in appearance but different in poses.
We propose a cross-image cycle consistency framework which applies the swapping-reconstruction strategy twice to obtain the final supervision.
Our proposed framework is shown to outperform strong baselines by a large margin.
arXiv Detail & Related papers (2020-06-29T13:57:20Z) - Unsupervised Image Classification for Deep Representation Learning [42.09716669386924]
We propose an unsupervised image classification framework without using embedding clustering.
Experiments on ImageNet dataset have been conducted to prove the effectiveness of our method.
Recently advanced unsupervised learning approaches use the siamese-like framework to compare two "views" from the same image for learning representations.
This work aims to involve the distance concept on label space in the unsupervised learning and let the model be aware of the soft degree of similarity between positive or negative pairs.
Despite its conceptual simplicity, we show empirically that with the solution -- Unsupervised image mixtures (Un-Mix), we can learn subtler, more robust and generalized representations from the transformed input and corresponding new label space.
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
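The mixture idea Un-Mix builds on can be illustrated with a mixup-style sketch, where the mixing coefficient doubles as the soft similarity target on the label space; the Beta-distributed coefficient is a common mixup convention, not necessarily the paper's exact formulation:

```python
import numpy as np

def unmix_pair(x1, x2, alpha=1.0, rng=None):
    """Mix two views; lam encodes the soft degree of similarity the
    model should assign to each source image."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam
```

The mixed input is lam-similar to `x1` and (1 - lam)-similar to `x2`, giving the model graded rather than binary targets for positive and negative pairs.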
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.