DecAug: Out-of-Distribution Generalization via Decomposed Feature
Representation and Semantic Augmentation
- URL: http://arxiv.org/abs/2012.09382v1
- Date: Thu, 17 Dec 2020 03:46:09 GMT
- Title: DecAug: Out-of-Distribution Generalization via Decomposed Feature
Representation and Semantic Augmentation
- Authors: Haoyue Bai, Rui Sun, Lanqing Hong, Fengwei Zhou, Nanyang Ye, Han-Jia
Ye, S.-H. Gary Chan, Zhenguo Li
- Abstract summary: Deep learning often struggles with out-of-distribution (OoD) generalization.
We propose DecAug, a novel decomposed feature representation and semantic augmentation approach for OoD generalization.
We show that DecAug outperforms other state-of-the-art methods on various OoD datasets.
- Score: 29.18840132995509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep learning demonstrates its strong ability to handle independent and
identically distributed (IID) data, it often struggles with out-of-distribution
(OoD) generalization, where the test data come from a distribution different
from the training one. Designing a general OoD generalization framework for a
wide range of applications is challenging, mainly due to possible correlation
shift and diversity shift in the real world. Most previous approaches can only
handle one specific type of distribution shift, such as shift across domains or
the extrapolation of correlation. To address this, we propose DecAug, a novel
decomposed feature representation and semantic augmentation approach for OoD
generalization. DecAug disentangles the category-related and context-related
features. Category-related features contain causal information of the target
object, while context-related features describe the attributes, styles,
backgrounds, or scenes, which cause distribution shifts between training and test
data. The decomposition is achieved by orthogonalizing the two gradients
(w.r.t. intermediate features) of losses for predicting category and context
labels. Furthermore, we perform gradient-based augmentation on context-related
features to improve the robustness of the learned representations. Experimental
results show that DecAug outperforms other state-of-the-art methods on various
OoD datasets and is among the very few methods that can deal with different
types of OoD generalization challenges.
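The two mechanisms described in the abstract, decomposing features by orthogonalizing the gradients of the category and context losses, and perturbing context-related features along a gradient direction, can be sketched roughly as follows. This is a minimal PyTorch-style sketch under stated assumptions: the backbone/branch/classifier modules, the squared-cosine orthogonality penalty, the perturbation form, and the equal loss weighting are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def decaug_step(backbone, cat_branch, ctx_branch,
                cat_cls, ctx_cls, joint_cls,
                x, y_cat, y_ctx, aug_scale=1.0):
    """One training step: decomposition via gradient orthogonality + semantic augmentation."""
    feat = backbone(x)                        # shared intermediate features

    # Branch features and their prediction losses (category vs. context labels).
    z_cat = cat_branch(feat)
    z_ctx = ctx_branch(feat)
    loss_cat = F.cross_entropy(cat_cls(z_cat), y_cat)
    loss_ctx = F.cross_entropy(ctx_cls(z_ctx), y_ctx)

    # Gradients of the two losses w.r.t. the shared intermediate features.
    # create_graph=True keeps them differentiable so the penalty below trains the network.
    g_cat = torch.autograd.grad(loss_cat, feat, create_graph=True)[0]
    g_ctx = torch.autograd.grad(loss_ctx, feat, create_graph=True)[0]

    # Orthogonality penalty (one plausible form): push the cosine between the two
    # gradients toward zero so the branches capture decomposed information.
    cos = F.cosine_similarity(g_cat.flatten(1), g_ctx.flatten(1), dim=1)
    loss_orth = cos.pow(2).mean()

    # Gradient-based semantic augmentation: perturb the context features along the
    # gradient of the context loss, then require the joint classifier to still
    # predict the category from the concatenated (category, perturbed context) features.
    g_aug = torch.autograd.grad(loss_ctx, z_ctx, retain_graph=True)[0]
    z_ctx_aug = z_ctx + aug_scale * g_aug.detach()        # assumed perturbation form
    loss_aug = F.cross_entropy(joint_cls(torch.cat([z_cat, z_ctx_aug], dim=1)), y_cat)

    # Equal weighting is an assumption; in practice each term would carry its own coefficient.
    return loss_cat + loss_ctx + loss_orth + loss_aug
```

A caller would wrap this in a standard optimization loop (zero the gradients, backpropagate the returned loss, step the optimizer); whether the augmentation uses the raw gradient, its sign, or a learned step size would need to follow the paper's actual formulation.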
Related papers
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA, a new data-driven, domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive to distribution shifts across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously [28.24666589680547]
We introduce a simple but effective approach for learning a fair and invariant classifier.
By augmenting diverse synthetic data domains through the model, we learn a fair and invariant classifier on the source domains.
This classifier can then be generalized to unknown target domains, preserving both predictive performance and fairness.
arXiv Detail & Related papers (2023-11-23T05:52:00Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift [50.98086766507025]
We propose a simple yet effective data augmentation strategy, Adversarial Invariant Augmentation (AIA).
AIA aims to extrapolate and generate new environments, while concurrently preserving the original stable features during the augmentation process.
arXiv Detail & Related papers (2022-11-05T07:55:55Z)
- Causal Transportability for Visual Recognition [70.13627281087325]
We show that standard classifiers fail because the association between images and labels is not transportable across settings.
We then show that the causal effect, which severs all sources of confounding, remains invariant across domains.
This motivates us to develop an algorithm to estimate the causal effect for image classification.
arXiv Detail & Related papers (2022-04-26T15:02:11Z)
- Imagine by Reasoning: A Reasoning-Based Implicit Semantic Data Augmentation for Long-Tailed Classification [17.08583412899347]
Real-world data often follows a long-tailed distribution, which makes the performance of existing classification algorithms degrade heavily.
We propose a novel reasoning-based implicit semantic data augmentation method to borrow transformation directions from other classes.
Experimental results on CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018 have demonstrated the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-12-15T07:14:39Z)
- Learning Domain Invariant Representations for Generalizable Person Re-Identification [71.35292121563491]
Generalizable person Re-Identification (ReID) has attracted growing attention in the computer vision community.
We introduce causality into person ReID and propose a novel generalizable framework, named Domain Invariant Representations for generalizable person Re-Identification (DIR-ReID).
arXiv Detail & Related papers (2021-03-29T18:59:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.