i-Mix: A Domain-Agnostic Strategy for Contrastive Representation
Learning
- URL: http://arxiv.org/abs/2010.08887v2
- Date: Thu, 18 Mar 2021 07:13:31 GMT
- Title: i-Mix: A Domain-Agnostic Strategy for Contrastive Representation
Learning
- Authors: Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak
Lee
- Abstract summary: We propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning.
In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains.
- Score: 117.63815437385321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive representation learning has been shown to be effective for
learning representations from unlabeled data. However, much of this progress has
been made in vision domains, relying on data augmentations carefully designed
using domain knowledge. In this work, we propose i-Mix, a simple yet effective
domain-agnostic regularization strategy for improving contrastive
representation learning. We cast contrastive learning as training a
non-parametric classifier by assigning a unique virtual class to each data
instance in a batch. Then, data instances are mixed in both the input and virtual label
spaces, providing more augmented data during training. In experiments, we
demonstrate that i-Mix consistently improves the quality of learned
representations across domains, including image, speech, and tabular data.
Furthermore, we confirm its regularization effect via extensive ablation
studies across model and dataset sizes. The code is available at
https://github.com/kibok90/imix.
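A minimal, illustrative PyTorch sketch of this idea in an N-pair/SimCLR-style setup is shown below: every instance in a batch defines its own virtual class, inputs and virtual one-hot labels are mixed with the same coefficient, and the loss is a cross-entropy over pairwise similarities. The names (encoder, alpha, temperature) and the input-space MixUp are assumptions for illustration, not details taken from the official repository.

    # i-Mix-style loss sketch (illustrative; see https://github.com/kibok90/imix for the authors' code)
    import torch
    import torch.nn.functional as F
    from torch.distributions import Beta

    def i_mix_npair_loss(encoder, x, x_tilde, alpha=1.0, temperature=0.2):
        """x, x_tilde: two augmented views of the same batch of N instances."""
        n = x.size(0)
        virtual_labels = torch.eye(n, device=x.device)   # virtual class i for instance i

        # Mix inputs and virtual labels with the same coefficient (MixUp-style).
        lam = Beta(alpha, alpha).sample().item()
        perm = torch.randperm(n, device=x.device)
        x_mix = lam * x + (1.0 - lam) * x[perm]
        y_mix = lam * virtual_labels + (1.0 - lam) * virtual_labels[perm]

        # Non-parametric classifier: logits are similarities between mixed
        # anchors and the keys computed from the other (unmixed) view.
        anchors = F.normalize(encoder(x_mix), dim=1)
        keys = F.normalize(encoder(x_tilde), dim=1)
        logits = anchors @ keys.t() / temperature        # shape (N, N)

        # Cross-entropy against the soft (mixed) virtual labels.
        return -(y_mix * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

Because the same coefficient is applied in the input and virtual-label spaces, each mixed sample acts as a soft positive for two virtual classes at once, which is how the method provides additional augmented data without relying on domain-specific augmentations.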
Related papers
- SiamSeg: Self-Training with Contrastive Learning for Unsupervised Domain Adaptation Semantic Segmentation in Remote Sensing [14.007392647145448]
UDA enables models to learn from unlabeled target domain data while training on labeled source domain data.
We propose integrating contrastive learning into UDA, enhancing the model's capacity to capture semantic information.
Our SiamSeg method outperforms existing approaches, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-10-17T11:59:39Z)
- Improving Deep Metric Learning by Divide and Conquer [11.380358587116683]
Deep metric learning (DML) is a cornerstone of many computer vision applications.
It aims at learning a mapping from the input domain to an embedding space, where semantically similar objects are located nearby and dissimilar objects far from one another.
We propose to build a more expressive representation by splitting the embedding space and the data hierarchically into smaller sub-parts.
arXiv Detail & Related papers (2021-09-09T02:57:34Z)
- ChessMix: Spatial Context Data Augmentation for Remote Sensing Semantic Segmentation [1.0152838128195467]
ChessMix creates new synthetic images by mixing transformed mini-patches across the dataset in a chessboard-like grid (a rough sketch of this patch-tiling idea follows this entry).
Results in three diverse well-known remote sensing datasets show that ChessMix is capable of improving the segmentation of objects with few labeled pixels.
arXiv Detail & Related papers (2021-08-26T01:01:43Z)
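The ChessMix entry above describes building synthetic training images from transformed mini-patches; a rough, hypothetical sketch of such patch tiling for segmentation data (grid size, patch size, and the rotation transform are illustrative assumptions, not details from the paper) could look like:

    # Hypothetical ChessMix-style tiling of random mini-patches into one synthetic sample.
    import numpy as np

    def chessmix_like(images, masks, grid=4, patch=64, rng=None):
        rng = rng or np.random.default_rng()
        channels = images[0].shape[2]
        new_img = np.zeros((grid * patch, grid * patch, channels), dtype=images[0].dtype)
        new_msk = np.zeros((grid * patch, grid * patch), dtype=masks[0].dtype)
        for gy in range(grid):
            for gx in range(grid):
                i = int(rng.integers(len(images)))                     # random source sample
                y = int(rng.integers(images[i].shape[0] - patch + 1))  # random crop position
                x = int(rng.integers(images[i].shape[1] - patch + 1))
                k = int(rng.integers(4))                               # random 90-degree rotation
                p_img = np.rot90(images[i][y:y + patch, x:x + patch], k)
                p_msk = np.rot90(masks[i][y:y + patch, x:x + patch], k)
                new_img[gy * patch:(gy + 1) * patch, gx * patch:(gx + 1) * patch] = p_img
                new_msk[gy * patch:(gy + 1) * patch, gx * patch:(gx + 1) * patch] = p_msk
        return new_img, new_msk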
- MixStyle Neural Networks for Domain Generalization and Adaptation [122.36901703868321]
MixStyle is a plug-and-play module that can improve domain generalization performance without the need to collect more data or increase model capacity.
Our experiments show that MixStyle can significantly boost out-of-distribution generalization performance across a wide range of tasks including image recognition, instance retrieval and reinforcement learning.
arXiv Detail & Related papers (2021-07-05T14:29:19Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- DomainMix: Learning Generalizable Person Re-Identification Without Human Annotations [89.78473564527688]
This paper shows how to use a labeled synthetic dataset and an unlabeled real-world dataset to train a universal model.
In this way, human annotations are no longer required, and the approach is scalable to large and diverse real-world datasets.
Experimental results show that the proposed annotation-free method is comparable to the counterpart trained with full human annotations.
arXiv Detail & Related papers (2020-11-24T08:15:53Z)
- Self-Supervised Domain Adaptation with Consistency Training [0.2462953128215087]
We consider the problem of unsupervised domain adaptation for image classification.
We create a self-supervised pretext task by augmenting the unlabeled data with a certain type of transformation.
We force the representation of the augmented data to be consistent with that of the original data (a minimal sketch of such a consistency objective follows this entry).
arXiv Detail & Related papers (2020-10-15T06:03:47Z)
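As a minimal illustration of the consistency objective described in the entry above (encoder and augment are placeholder names; the paper's exact loss and gradient handling may differ):

    # Hypothetical representation-consistency loss for unlabeled target-domain data.
    import torch
    import torch.nn.functional as F

    def consistency_loss(encoder, x_unlabeled, augment):
        with torch.no_grad():
            z_orig = encoder(x_unlabeled)        # representation of the original data (no gradient here, by assumption)
        z_aug = encoder(augment(x_unlabeled))    # representation of the augmented data
        return F.mse_loss(z_aug, z_orig)         # penalize disagreement between the two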
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the resulting enlarged dataset can significantly improve the ability of the learned FER model.
To reduce the burden of training on the enlarged dataset, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.