Self-Supervised Domain Adaptation with Consistency Training
- URL: http://arxiv.org/abs/2010.07539v1
- Date: Thu, 15 Oct 2020 06:03:47 GMT
- Title: Self-Supervised Domain Adaptation with Consistency Training
- Authors: L. Xiao, J. Xu, D. Zhao, Z. Wang, L. Wang, Y. Nie, B. Dai
- Abstract summary: We consider the problem of unsupervised domain adaptation for image classification.
We create a self-supervised pretext task by augmenting the unlabeled data with a certain type of transformation.
We force the representation of the augmented data to be consistent with that of the original data.
- Score: 0.2462953128215087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of unsupervised domain adaptation for image
classification. To learn target-domain-aware features from the unlabeled data,
we create a self-supervised pretext task by augmenting the unlabeled data with
a certain type of transformation (specifically, image rotation) and ask the
learner to predict the properties of the transformation. However, the obtained
feature representation may contain a large amount of irrelevant information
with respect to the main task. To provide further guidance, we force the
feature representation of the augmented data to be consistent with that of the
original data. Intuitively, the consistency introduces additional constraints
to representation learning; therefore, the learned representation is more
likely to focus on the right information about the main task. Our experimental
results validate the proposed method and demonstrate state-of-the-art
performance on classical domain adaptation benchmarks. Code is available at
https://github.com/Jiaolong/ss-da-consistency.
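The abstract describes two objectives on unlabeled target images: a rotation-prediction pretext loss and a consistency loss that keeps the features of a rotated image close to those of the original. Below is a minimal PyTorch sketch of how these two losses could be computed; the backbone architecture, the mean-squared-error form of the consistency term, and the stop-gradient on the original features are illustrative assumptions rather than details taken from the paper (the authors' implementation is in the linked repository).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative backbone and rotation head; the paper's architecture will differ.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
)
rot_head = nn.Linear(128, 4)  # classifies the rotation angle {0, 90, 180, 270}

def rotate_batch(x):
    """Rotate each CHW image by a random multiple of 90 degrees.

    Assumes square images so the rotated tensors stack cleanly.
    """
    k = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, int(ki), dims=(1, 2)) for img, ki in zip(x, k)]
    )
    return rotated, k

def self_supervised_losses(x_target):
    """Pretext + consistency losses on an unlabeled target-domain batch."""
    x_rot, rot_labels = rotate_batch(x_target)
    feat = backbone(x_target)      # features of the original images
    feat_rot = backbone(x_rot)     # features of the augmented (rotated) images
    # Pretext task: predict which rotation was applied.
    pretext_loss = F.cross_entropy(rot_head(feat_rot), rot_labels)
    # Consistency: keep augmented features close to the original features
    # (MSE and the detach are assumptions, not stated in the abstract).
    consistency_loss = F.mse_loss(feat_rot, feat.detach())
    return pretext_loss, consistency_loss
```

In full training, these terms would be added, with weighting coefficients, to the supervised cross-entropy loss on labeled source images.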
Related papers
- Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale Fine-Grained Image Retrieval [65.43522019468976]
We propose attribute-aware hashing networks with self-consistency for generating attribute-aware hash codes.
We develop an encoder-decoder reconstruction network to distill high-level attribute-specific vectors in an unsupervised manner.
Our models are equipped with a feature-decorrelation constraint on these attribute vectors to strengthen their representational power (a generic sketch of such a constraint follows this entry).
arXiv Detail & Related papers (2023-11-21T08:20:38Z) - Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations [63.73044203154743]
- Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations [63.73044203154743]
Self-supervised representation learning often uses data augmentations to induce "style" attributes of the data.
It is difficult to deduce a priori which attributes of the data are indeed "style" and can be safely discarded.
We introduce a more principled approach that seeks to disentangle style features rather than discard them.
arXiv Detail & Related papers (2023-11-15T09:34:08Z) - Unsupervised domain adaptation by learning using privileged information [6.748420131629902]
We show that training-time access to side information in the form of auxiliary variables can help relax restrictions on input variables.
We propose a simple two-stage learning algorithm, inspired by our analysis of the expected error in the target domain, and a practical end-to-end variant for image classification.
arXiv Detail & Related papers (2023-03-16T14:31:50Z) - Cross-Domain Aspect Extraction using Transformers Augmented with
Knowledge Graphs [3.662157175955389]
We propose a novel approach for automatically constructing domain-specific knowledge graphs that contain information relevant to the identification of aspect terms.
We demonstrate state-of-the-art performance on benchmark datasets for cross-domain aspect term extraction using our approach and investigate how the amount of external knowledge available to the Transformer impacts model performance.
arXiv Detail & Related papers (2022-10-18T20:18:42Z) - Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER)
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - Disentanglement by Cyclic Reconstruction [0.0]
In supervised learning, information specific to the dataset used for training, but irrelevant to the task at hand, may remain encoded in the extracted representations.
We propose splitting the information into a task-related representation and its complementary context representation.
We then adapt this method to the unsupervised domain adaptation problem, which consists of training a model that performs well on both a source and a target domain.
arXiv Detail & Related papers (2021-12-24T07:47:59Z) - Robust Representation Learning via Perceptual Similarity Metrics [18.842322467828502]
Contrastive Input Morphing (CIM) is a representation learning framework that learns input-space transformations of the data.
We show that CIM is complementary to other mutual information-based representation learning techniques.
arXiv Detail & Related papers (2021-06-11T21:45:44Z) - Conditional Contrastive Learning: Removing Undesirable Information in
Self-Supervised Representations [108.29288034509305]
We develop conditional contrastive learning to remove undesirable information in self-supervised representations.
We demonstrate empirically that our methods can successfully learn self-supervised representations for downstream tasks.
arXiv Detail & Related papers (2021-06-05T10:51:26Z) - i-Mix: A Domain-Agnostic Strategy for Contrastive Representation
- i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning [117.63815437385321]
We propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning.
In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains.
arXiv Detail & Related papers (2020-10-17T23:32:26Z) - Uniform Priors for Data-Efficient Transfer [65.086680950871]
- Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data.
arXiv Detail & Related papers (2020-06-30T04:39:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.