Heterogeneous Domain Generalization via Domain Mixup
- URL: http://arxiv.org/abs/2009.05448v1
- Date: Fri, 11 Sep 2020 13:53:56 GMT
- Title: Heterogeneous Domain Generalization via Domain Mixup
- Authors: Yufei Wang (1 and 2), Haoliang Li (2), and Alex C. Kot (2) ((1)
University of Electronic Science and Technology of China, China; (2) Nanyang
Technological University, Singapore)
- Abstract summary: One of the main drawbacks of deep Convolutional Neural Networks (DCNN) is that they lack generalization capability.
We propose a novel heterogeneous domain generalization method by mixing up samples across multiple source domains.
Our experimental results based on the Visual Decathlon benchmark demonstrate the effectiveness of our proposed method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main drawbacks of deep Convolutional Neural Networks (DCNN) is
that they lack generalization capability. In this work, we focus on the problem
of heterogeneous domain generalization, which aims to improve generalization
capability across different tasks: that is, how to learn a DCNN model from
multiple domains' data such that the trained feature extractor generalizes to
support recognition of novel categories in a novel target domain. To solve this
problem, we propose a novel heterogeneous domain generalization method that
mixes up samples across multiple source domains with two different sampling
strategies. Our experimental results on the Visual Decathlon benchmark
demonstrate the effectiveness of the proposed method. The code is released at
\url{https://github.com/wyf0912/MIXALL}.
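The core idea above, interpolating samples drawn from different source domains, can be sketched as standard mixup applied across domains. The function below is an illustrative reconstruction under that assumption, not the authors' released code, and it does not reproduce the paper's two specific sampling strategies:

```python
import numpy as np

def domain_mixup(x_a, y_a, x_b, y_b, alpha=0.2, rng=None):
    """Mix a batch from source domain A with a batch from source domain B.

    x_a, x_b: input batches of identical shape; y_a, y_b: one-hot labels.
    This is a generic cross-domain mixup sketch, not the paper's exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # mixing coefficient, as in standard mixup
    x = lam * x_a + (1.0 - lam) * x_b     # interpolate inputs across domains
    y = lam * y_a + (1.0 - lam) * y_b     # interpolate the one-hot labels
    return x, y, lam
```

In practice the two batches would be drawn by whichever cross-domain sampling strategy is in use; here they are simply passed in by the caller.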
Related papers
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization [38.50749918578154]
We propose Heterogeneity-based Two-stage Contrastive Learning (HTCL) for the Domain generalization task.
In the first stage, we generate the most heterogeneous dividing pattern with our contrastive metric.
In the second stage, we employ contrastive learning that re-builds pairs using the stable relations implied by domains and classes.
arXiv Detail & Related papers (2023-05-25T09:42:43Z)
- Domain Generalization through the Lens of Angular Invariance [44.76809026901016]
Domain generalization (DG) aims at generalizing a classifier trained on multiple source domains to an unseen target domain with domain shift.
We propose a novel deep DG method called the Angular Invariance Domain Generalization Network (AIDGN).
arXiv Detail & Related papers (2022-10-28T02:05:38Z)
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
Built upon a proposed selective balanced sampling strategy, TALLY achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
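The representation-mixing step summarized above can be sketched as follows, assuming hypothetical encoders have already disentangled each example into a semantic code and a domain-associated nuisance code; the names and shapes are assumptions for illustration, not TALLY's actual architecture:

```python
import numpy as np

def mix_representations(sem, nuis, rng=None):
    """Pair each example's semantic code with another example's nuisance code.

    sem:  array (batch, d_sem) of semantic representations.
    nuis: array (batch, d_nuis) of domain-associated nuisance representations.
    Returns augmented representations combining semantics of one example
    with the nuisances of a randomly shuffled example.
    """
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(sem.shape[0])          # shuffled partner indices
    # Keep the semantic part; swap in the partner's domain nuisances.
    return np.concatenate([sem, nuis[perm]], axis=1)
```

The label of each augmented representation would follow its semantic half, which is the point of the augmentation: the classifier sees the same semantics under varied domain nuisances.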
arXiv Detail & Related papers (2022-10-25T21:54:26Z)
- Improving Diversity with Adversarially Learned Transformations for Domain Generalization [81.26960899663601]
We present a novel framework that uses adversarially learned transformations (ALT) using a neural network to model plausible, yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct, and large transformations of the source domain leading to state-of-the-art performance.
arXiv Detail & Related papers (2022-06-15T18:05:24Z)
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- Generalizable Person Re-identification with Relevance-aware Mixture of Experts [45.13716166680772]
We propose a novel method called the relevance-aware mixture of experts (RaMoE).
RaMoE uses an effective voting-based mixture mechanism to dynamically leverage source domains' diverse characteristics to improve the model's generalization.
Considering the target domains' invisibility during training, we propose a novel learning-to-learn algorithm combined with our relation alignment loss to update the voting network.
arXiv Detail & Related papers (2021-05-19T14:19:34Z)
- Domain Generalization with MixStyle [120.52367818581608]
Domain generalization aims to address this problem by learning from a set of source domains a model that is generalizable to any unseen domain.
Our method, termed MixStyle, is motivated by the observation that visual domain is closely related to image style.
MixStyle fits into mini-batch training perfectly and is extremely easy to implement.
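Since the summary notes that MixStyle slots into mini-batch training and is easy to implement, a minimal sketch of the underlying idea, mixing per-instance feature statistics across a shuffled batch, might look like this. It is a NumPy illustration of the concept, not the authors' implementation:

```python
import numpy as np

def mixstyle(x, alpha=0.1, rng=None):
    """Mix instance-level feature statistics (style) across a mini-batch.

    x: feature maps of shape (batch, channels, height, width).
    Each instance is style-normalized, then re-styled with per-channel
    mean/std mixed between itself and a randomly shuffled instance.
    """
    rng = np.random.default_rng() if rng is None else rng
    b = x.shape[0]
    mu = x.mean(axis=(2, 3), keepdims=True)           # per-instance channel means
    sig = x.std(axis=(2, 3), keepdims=True) + 1e-6    # per-instance channel stds
    x_norm = (x - mu) / sig                           # remove instance style
    lam = rng.beta(alpha, alpha, size=(b, 1, 1, 1))   # per-instance mixing weights
    perm = rng.permutation(b)                         # shuffled style partners
    mu_mix = lam * mu + (1.0 - lam) * mu[perm]        # mixed means
    sig_mix = lam * sig + (1.0 - lam) * sig[perm]     # mixed stds
    return x_norm * sig_mix + mu_mix                  # re-apply mixed style
```

Because the operation only touches feature statistics within a mini-batch, it can be dropped between convolutional stages without changing the network's interface.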
arXiv Detail & Related papers (2021-04-05T16:58:09Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches attempt to solve domain generalization by learning domain-invariant representations across the source domains, but such representations fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method, which provides a theoretical guarantee: the target error is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.