Learning Weakly-Supervised Contrastive Representations
- URL: http://arxiv.org/abs/2202.06670v1
- Date: Mon, 14 Feb 2022 12:57:31 GMT
- Title: Learning Weakly-Supervised Contrastive Representations
- Authors: Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan
Salakhutdinov, Louis-Philippe Morency
- Abstract summary: We present a two-stage weakly-supervised contrastive learning approach.
The first stage is to cluster data according to its auxiliary information.
The second stage is to learn similar representations within the same cluster and dissimilar representations for data from different clusters.
- Score: 104.42824068960668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We argue that a valuable form of information provided by the auxiliary
information is the data clustering it implies. For instance, treating hashtags
as auxiliary information, we can hypothesize that Instagram images sharing the
same hashtags are semantically more similar to one another. With
this intuition, we present a two-stage weakly-supervised contrastive learning
approach. The first stage is to cluster data according to its auxiliary
information. The second stage is to learn similar representations within the
same cluster and dissimilar representations for data from different clusters.
Our empirical experiments suggest the following three contributions. First,
compared to conventional self-supervised representations, the
auxiliary-information-infused representations bring the performance closer to
the supervised representations, which use direct downstream labels as
supervision signals. Second, our approach performs best in most cases when
compared with other baseline representation learning methods that also leverage
auxiliary information. Third, we show that our approach also works well with
clusters constructed without supervision (i.e., no auxiliary information),
resulting in a strong unsupervised representation learning approach.
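The two-stage recipe above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: `cluster_by_hashtag` assumes one hashtag per image for simplicity, and the stage-two loss is a generic supervised-contrastive-style objective that uses cluster ids as pseudo-labels.

```python
import numpy as np

def cluster_by_hashtag(hashtags):
    """Stage 1: map each example to a pseudo-label derived from its
    auxiliary information (here, a single hashtag per image)."""
    labels = {}
    return np.array([labels.setdefault(h, len(labels)) for h in hashtags])

def weakly_supervised_contrastive_loss(z, cluster_ids, tau=0.1):
    """Stage 2: pull together representations that share a cluster id
    and push apart the rest (supervised-contrastive-style objective)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / tau                                # pairwise similarities
    n = len(z)
    mask = ~np.eye(n, dtype=bool)                      # exclude self-pairs
    pos = (cluster_ids[:, None] == cluster_ids[None, :]) & mask
    logits = np.where(mask, sim, -np.inf)              # drop self-similarity
    # log-softmax over all other examples in the batch
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # average negative log-likelihood over each anchor's positives
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

In practice the embeddings `z` would come from an encoder network and the loss would be minimized by gradient descent; this sketch only shows the objective itself.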
Related papers
- Consistency Based Weakly Self-Supervised Learning for Human Activity Recognition with Wearables [1.565361244756411]
We describe a weakly self-supervised approach for recognizing human activities from sensor-based data.
We show that our approach can help the clustering algorithm achieve comparable performance in identifying and categorizing the underlying human activities.
arXiv Detail & Related papers (2024-07-29T06:29:21Z) - ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z) - Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z) - Clustering by Maximizing Mutual Information Across Views [62.21716612888669]
We propose a novel framework for image clustering that incorporates joint representation learning and clustering.
Our method significantly outperforms state-of-the-art single-stage clustering methods across a variety of image datasets.
arXiv Detail & Related papers (2021-07-24T15:36:49Z) - Integrating Auxiliary Information in Self-supervised Learning [94.11964997622435]
We first observe that the auxiliary information may bring us useful information about data structures.
We propose constructing data clusters according to the auxiliary information.
We show that Cl-InfoNCE may be a better approach to leverage the data clustering information.
arXiv Detail & Related papers (2021-06-05T11:01:15Z) - Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
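For reference, the InfoNCE objective that the co-training scheme builds on can be sketched as follows. This is a minimal, generic NumPy version (function name and the two-view setup are illustrative assumptions, not code from the paper): each example in one view must identify its own counterpart in the other view among all examples in the batch.

```python
import numpy as np

def info_nce(z1, z2, tau=0.07):
    """InfoNCE over two views: positives sit on the diagonal of the
    similarity matrix; every other example in the batch is a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize view 1
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)  # L2-normalize view 2
    logits = z1 @ z2.T / tau                             # cross-view similarities
    # cross-entropy with targets = diagonal indices
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

The loss is small when matched pairs are far more similar than mismatched ones, which is exactly the behavior a good two-view representation should exhibit.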
arXiv Detail & Related papers (2020-10-19T17:59:01Z) - Learning Robust Representations via Multi-View Information Bottleneck [41.65544605954621]
The original formulation requires labeled data to identify superfluous information.
We extend this ability to the multi-view unsupervised setting, where two views of the same underlying entity are provided but the label is unknown.
A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and label-limited versions of the MIR-Flickr dataset.
arXiv Detail & Related papers (2020-02-17T16:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.