Self-Supervised Representation Learning With MUlti-Segmental
Informational Coding (MUSIC)
- URL: http://arxiv.org/abs/2206.06461v1
- Date: Mon, 13 Jun 2022 20:37:48 GMT
- Title: Self-Supervised Representation Learning With MUlti-Segmental
Informational Coding (MUSIC)
- Authors: Chuang Niu and Ge Wang
- Abstract summary: Self-supervised representation learning maps high-dimensional data into a meaningful embedding space.
We propose MUlti-Segmental Informational Coding (MUSIC) for self-supervised representation learning.
- Score: 6.693379403133435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised representation learning maps high-dimensional data into a
meaningful embedding space, where samples with similar semantic content are
close to each other. Most recent representation learning methods maximize the
cosine similarity or minimize the distance between the embedding features of
different views of the same sample, usually on the $l_2$-normalized unit
hypersphere. To prevent the trivial solution in which all samples collapse to
the same embedding feature, various techniques have been developed, such as
contrastive learning, stop-gradient, and variance and covariance regularization.
In this study, we propose MUlti-Segmental Informational Coding (MUSIC) for
self-supervised representation learning. MUSIC divides the embedding feature
into multiple segments that discriminatively partition samples into different
semantic clusters, with different segments focusing on different partitioning
principles. Information-theoretic measures are used directly to optimize MUSIC
and theoretically guarantee that trivial solutions are avoided. MUSIC does not
depend on commonly used techniques such as memory banks, large batches,
asymmetric networks, gradient stopping, or momentum weight updating, which
keeps the training framework flexible. Our experiments demonstrate that MUSIC
achieves better results than the closely related Barlow Twins and VICReg methods on
ImageNet classification with linear probing, and requires neither deep
projectors nor large feature dimensions. Code will be made available.
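To make the segment-wise, information-theoretic idea concrete, the following is a minimal PyTorch sketch of a MUSIC-style objective. It is an illustration under stated assumptions, not the authors' released implementation: the function name, the particular combination of terms (cross-view agreement, low conditional entropy per sample, high marginal entropy per segment), and the hyperparameters are hypothetical placeholders chosen to match the abstract's description.

```python
# Hypothetical sketch of a segment-wise, information-theoretic objective in the
# spirit of the MUSIC abstract (not the authors' code).
import torch
import torch.nn.functional as F

def segment_info_loss(z1: torch.Tensor, z2: torch.Tensor,
                      num_segments: int, eps: float = 1e-8) -> torch.Tensor:
    """z1, z2: (batch, dim) projector outputs for two augmented views of the
    same batch; dim must be divisible by num_segments."""
    b, d = z1.shape
    assert d % num_segments == 0, "embedding dim must split evenly into segments"

    # Split each embedding into segments and turn every segment into a soft
    # cluster assignment: shape (batch, segments, clusters_per_segment).
    p1 = F.softmax(z1.view(b, num_segments, -1), dim=-1)
    p2 = F.softmax(z2.view(b, num_segments, -1), dim=-1)

    # (i) Cross-view agreement: the two views of a sample should fall into the
    # same cluster in every segment (symmetric cross-entropy, no stop-gradient).
    agreement = -(p2 * torch.log(p1 + eps) + p1 * torch.log(p2 + eps)).sum(-1).mean()

    # (ii) Conditional entropy H(cluster | sample): low values mean confident,
    # discriminative assignments within each segment.
    cond_ent = -(p1 * torch.log(p1 + eps)).sum(-1).mean()

    # (iii) Marginal entropy H(cluster) over the batch: high values mean all
    # clusters of a segment are used, ruling out the collapsed trivial solution.
    marg = p1.mean(dim=0)  # (segments, clusters_per_segment)
    marg_ent = -(marg * torch.log(marg + eps)).sum(-1).mean()

    return agreement + cond_ent - marg_ent

# Hypothetical usage with random projector outputs:
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = segment_info_loss(z1, z2, num_segments=8)  # 8 segments, 16 clusters each
```

In this sketch, the marginal-entropy term is what blocks the trivial solution mentioned in the abstract: if every sample fell into a single cluster of a segment, that segment's marginal entropy would drop to zero, so maximizing it keeps each partition informative without memory banks, stop-gradients, or momentum encoders.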
Related papers
- Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the data-hungry nature of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Multi-scale and Cross-scale Contrastive Learning for Semantic Segmentation [5.281694565226513]
We apply contrastive learning to enhance the discriminative power of the multi-scale features extracted by semantic segmentation networks.
By first mapping the encoder's multi-scale representations to a common feature space, we instantiate a novel form of supervised local-global constraint.
arXiv Detail & Related papers (2022-03-25T01:24:24Z)
- Improving Deep Metric Learning by Divide and Conquer [11.380358587116683]
Deep metric learning (DML) is a cornerstone of many computer vision applications.
It aims at learning a mapping from the input domain to an embedding space, where semantically similar objects are located close to each other and dissimilar objects far from one another.
We propose to build a more expressive representation by splitting the embedding space and the data hierarchically into smaller sub-parts.
arXiv Detail & Related papers (2021-09-09T02:57:34Z)
- Generalized One-Class Learning Using Pairs of Complementary Classifiers [41.64645294104883]
One-class learning is the classic problem of fitting a model to the data for which annotations are available only for a single class.
In this paper, we explore novel objectives for one-class learning, which we collectively refer to as Generalized One-class Discriminative Subspaces (GODS).
arXiv Detail & Related papers (2021-06-24T18:52:05Z)
- Remote Sensing Images Semantic Segmentation with General Remote Sensing Vision Model via a Self-Supervised Contrastive Learning Method [13.479068312825781]
We propose Global style and Local matching Contrastive Learning Network (GLCNet) for remote sensing semantic segmentation.
Specifically, the global style contrastive module is used to better learn an image-level representation.
The local features matching contrastive module is designed to learn representations of local regions, which is beneficial for semantic segmentation.
arXiv Detail & Related papers (2021-06-20T03:03:40Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across (i) object- versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Multi-scale Interactive Network for Salient Object Detection [91.43066633305662]
We propose the aggregate interaction modules to integrate the features from adjacent levels.
To obtain more efficient multi-scale features, the self-interaction modules are embedded in each decoder unit.
Experimental results on five benchmark datasets demonstrate that the proposed method without any post-processing performs favorably against 23 state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-17T15:41:37Z)
- DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning [83.48587570246231]
Visual Similarity plays an important role in many computer vision applications.
Deep metric learning (DML) is a powerful framework for learning such similarities.
We propose and study multiple complementary learning tasks, targeting conceptually different data relationships.
We learn a single model to aggregate their training signals, resulting in strong generalization and state-of-the-art performance.
arXiv Detail & Related papers (2020-04-28T12:26:50Z)
- Improving Few-shot Learning by Spatially-aware Matching and CrossTransformer [116.46533207849619]
We study the impact of scale and location mismatch in the few-shot learning scenario.
We propose a novel Spatially-aware Matching scheme to effectively perform matching across multiple scales and locations.
arXiv Detail & Related papers (2020-01-06T14:10:20Z)