Contrastive Syn-to-Real Generalization
- URL: http://arxiv.org/abs/2104.02290v1
- Date: Tue, 6 Apr 2021 05:10:29 GMT
- Title: Contrastive Syn-to-Real Generalization
- Authors: Wuyang Chen, Zhiding Yu, Shalini De Mello, Sifei Liu, Jose M. Alvarez,
Zhangyang Wang, Anima Anandkumar
- Abstract summary: We make a key observation that the diversity of the learned feature embeddings plays an important role in the generalization performance.
We propose contrastive synthetic-to-real generalization (CSG), a novel framework that leverages the pre-trained ImageNet knowledge to prevent overfitting to the synthetic domain.
We demonstrate the effectiveness of CSG on various synthetic training tasks, exhibiting state-of-the-art performance on zero-shot domain generalization.
- Score: 125.54991489017854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training on synthetic data can be beneficial for label or data-scarce
scenarios. However, synthetically trained models often suffer from poor
generalization in real domains due to domain gaps. In this work, we make a key
observation that the diversity of the learned feature embeddings plays an
important role in the generalization performance. To this end, we propose
contrastive synthetic-to-real generalization (CSG), a novel framework that
leverages the pre-trained ImageNet knowledge to prevent overfitting to the
synthetic domain, while promoting the diversity of feature embeddings as an
inductive bias to improve generalization. In addition, we enhance the proposed
CSG framework with attentional pooling (A-pool) to let the model focus on
semantically important regions and further improve its generalization. We
demonstrate the effectiveness of CSG on various synthetic training tasks,
exhibiting state-of-the-art performance on zero-shot domain generalization.
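As a rough illustration of the two ingredients described in the abstract, the sketch below (NumPy; all names, shapes, and details are hypothetical simplifications, not the paper's actual implementation) combines an InfoNCE-style contrastive loss that pulls task-model embeddings toward their frozen ImageNet counterparts with a simple attention-weighted spatial pooling in the spirit of A-pool:

```python
import numpy as np

def attentional_pool(feat_map, query):
    # feat_map: (HW, C) spatial features; query: (C,) global query vector.
    # Weight spatial positions by attention, then pool (A-pool sketch).
    scores = feat_map @ query / np.sqrt(feat_map.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ feat_map  # attention-weighted pooled feature, shape (C,)

def info_nce(z_task, z_frozen, tau=0.1):
    # Pull each task embedding toward its frozen ImageNet counterpart
    # (diagonal positives) and away from other images in the batch.
    z_task = z_task / np.linalg.norm(z_task, axis=1, keepdims=True)
    z_frozen = z_frozen / np.linalg.norm(z_frozen, axis=1, keepdims=True)
    logits = z_task @ z_frozen.T / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

The loss is minimized when each synthetic-domain embedding matches its pre-trained counterpart more closely than any other image's, which is one way to retain ImageNet feature diversity during synthetic training.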
Related papers
- Revisiting the Robust Generalization of Adversarial Prompt Tuning [4.033827046965844]
We propose an adaptive Consistency-guided Adversarial Prompt Tuning (CAPT) framework to enhance the alignment of image and text features for adversarial examples.
We conduct experiments across 14 datasets and 4 data sparsity schemes to show the superiority of CAPT over other state-of-the-art adaption methods.
arXiv Detail & Related papers (2024-05-18T02:54:41Z)
- Multi-Scale and Multi-Layer Contrastive Learning for Domain Generalization [5.124256074746721]
We argue that the generalization ability of deep convolutional neural networks can be improved by taking advantage of multi-layer and multi-scaled representations of the network.
We introduce a framework that aims at improving domain generalization of image classifiers by combining both low-level and high-level features at multiple scales.
We show that our model is able to surpass the performance of previous DG methods and consistently produce competitive and state-of-the-art results in all datasets.
arXiv Detail & Related papers (2023-08-28T02:54:27Z)
- GCISG: Guided Causal Invariant Learning for Improved Syn-to-real Generalization [1.2215956380648065]
Training a deep learning model with artificially generated data can be an alternative when training data are scarce.
In this paper, we characterize the domain gap by using a causal framework for data generation.
We propose causal invariance learning which encourages the model to learn a style-invariant representation that enhances the syn-to-real generalization.
arXiv Detail & Related papers (2022-08-22T02:39:05Z)
- A Style and Semantic Memory Mechanism for Domain Generalization [108.98041306507372]
Intra-domain style invariance is of pivotal importance in improving the efficiency of domain generalization.
We propose a novel "jury" mechanism, which is particularly effective in learning useful semantic feature commonalities among domains.
Our proposed framework surpasses the state-of-the-art methods by clear margins.
arXiv Detail & Related papers (2021-12-14T16:23:24Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- HCDG: A Hierarchical Consistency Framework for Domain Generalization on Medical Image Segmentation [33.623948922908184]
We present a novel Hierarchical Consistency framework for Domain Generalization (HCDG).
For the Extrinsic Consistency, we leverage the knowledge across multiple source domains to enforce data-level consistency.
For the Intrinsic Consistency, we perform task-level consistency for the same instance under the dual-task scenario.
arXiv Detail & Related papers (2021-09-13T07:07:23Z)
- Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z)
- Automated Synthetic-to-Real Generalization [142.41531132965585]
We propose a learning-to-optimize (L2O) strategy to automate the selection of layer-wise learning rates.
We demonstrate that the proposed framework can significantly improve the synthetic-to-real generalization performance without seeing and training on real data.
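Independent of the learned controller the summary describes, the object being automated here, a distinct learning rate per layer, can be sketched in a few lines (NumPy; layer names and rates are hypothetical, for illustration only):

```python
import numpy as np

def sgd_step_layerwise(params, grads, lrs):
    """One plain SGD step where each named layer gets its own
    learning rate -- the quantity an L2O controller would select."""
    return {name: p - lrs[name] * grads[name] for name, p in params.items()}

# Example: a small rate for pretrained backbone layers, a larger one
# for the randomly initialized task head.
params = {"backbone": np.ones(3), "head": np.ones(3)}
grads = {"backbone": np.ones(3), "head": np.ones(3)}
lrs = {"backbone": 1e-3, "head": 1e-1}
params = sgd_step_layerwise(params, grads, lrs)
```

In the L2O setting, the `lrs` dictionary would be produced by the learned policy at each step rather than fixed by hand.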
arXiv Detail & Related papers (2020-07-14T10:57:34Z)
- Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets.
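To make the TEA idea concrete, here is a minimal linear sketch (NumPy; purely illustrative, not the paper's model): targets are embedded into a low-dimensional latent via SVD, a decoder reconstructs targets from that latent, and a regressor predicts the same latent from features.

```python
import numpy as np

def fit_linear_tea(X, Y, k):
    # Embed targets Y into a k-dim latent Z (SVD "encoder"), keep the
    # decoder Vt[:k], and fit a linear map from features X to Z, so the
    # latent is both reconstructable to Y and predictable from X.
    x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y - y_mean, full_matrices=False)
    Z = (Y - y_mean) @ Vt[:k].T                       # target embeddings
    W_pred, *_ = np.linalg.lstsq(X - x_mean, Z, rcond=None)
    return W_pred, Vt[:k], x_mean, y_mean

def tea_predict(X, W_pred, W_dec, x_mean, y_mean):
    # features -> latent -> reconstructed high-dimensional target
    return (X - x_mean) @ W_pred @ W_dec + y_mean
```

When the high-dimensional targets truly lie near a low-rank subspace that is linearly predictable from the features, this two-stage route recovers them; the paper's nonlinear TEA generalizes the same joint objective.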
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.