Modality-Agnostic Debiasing for Single Domain Generalization
- URL: http://arxiv.org/abs/2303.07123v1
- Date: Mon, 13 Mar 2023 13:56:11 GMT
- Title: Modality-Agnostic Debiasing for Single Domain Generalization
- Authors: Sanqing Qu, Yingwei Pan, Guang Chen, Ting Yao, Changjun Jiang, Tao Mei
- Abstract summary: We introduce a versatile Modality-Agnostic Debiasing (MAD) framework for single-DG.
For recognition on 3D point clouds and semantic segmentation on 2D images, MAD improves DSU by 2.82% and 1.5% in accuracy and mIOU, respectively.
- Score: 105.60451710436735
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks (DNNs) usually fail to generalize well to
out-of-distribution (OOD) data, especially in the extreme case of single domain
generalization (single-DG) that transfers DNNs from a single domain to multiple
unseen domains. Existing single-DG techniques commonly devise various
data-augmentation algorithms, and remould the multi-source domain
generalization methodology to learn domain-generalized (semantic) features.
Nevertheless, these methods are typically modality-specific, and thus only
applicable to a single modality (e.g., image). In contrast, we target a
versatile Modality-Agnostic Debiasing (MAD) framework for single-DG that
enables generalization across different modalities. Technically, MAD introduces a
novel two-branch classifier: a biased-branch encourages the classifier to
identify domain-specific (superficial) features, while a general-branch
captures domain-generalized features based on the knowledge from the biased-branch.
Our MAD is appealing in that it is pluggable into most single-DG models. We
validate the superiority of our MAD in a variety of single-DG scenarios with
different modalities, including recognition on 1D texts, 2D images, 3D point
clouds, and semantic segmentation on 2D images. More remarkably, for
recognition on 3D point clouds and semantic segmentation on 2D images, MAD
improves DSU by 2.82% and 1.5% in accuracy and mIOU, respectively.
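
As a rough illustration of the two-branch design described in the abstract, below is a minimal PyTorch-style sketch of a classifier head with a biased branch and a general branch. The backbone, feature dimension, and the particular way the general branch uses the biased branch's knowledge here (suppressing feature channels the biased head weights most heavily) are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a two-branch classifier in the spirit of MAD.
# Assumptions (not from the paper): the feature dimension and the way the
# general branch uses the biased branch's knowledge (down-weighting feature
# channels the biased head relies on most) are illustrative choices only.
import torch
import torch.nn as nn


class TwoBranchClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Biased branch: encouraged to latch onto domain-specific
        # (superficial) features.
        self.biased_head = nn.Linear(feat_dim, num_classes)
        # General branch: trained to capture domain-generalized
        # (semantic) features.
        self.general_head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor):
        biased_logits = self.biased_head(feats)
        # "Knowledge" from the biased branch: channels with large classifier
        # weights are treated as domain-specific and suppressed before the
        # general branch sees the features.
        with torch.no_grad():
            channel_bias = self.biased_head.weight.abs().mean(dim=0)
            suppress = 1.0 - channel_bias / (channel_bias.max() + 1e-8)
        general_logits = self.general_head(feats * suppress)
        return biased_logits, general_logits


if __name__ == "__main__":
    # Toy usage with random features standing in for any encoder's output.
    model = TwoBranchClassifier(feat_dim=512, num_classes=10)
    feats = torch.randn(4, 512)
    biased_logits, general_logits = model(feats)
    print(biased_logits.shape, general_logits.shape)
```

Because such a head only consumes feature vectors, it could in principle be attached to a 1D text, 2D image, or 3D point-cloud encoder alike, which is what makes the design modality-agnostic.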
Related papers
- Uncertainty-guided Contrastive Learning for Single Source Domain Generalisation [15.907643838530655]
In this paper, we introduce a novel model referred to as Contrastive Uncertainty Domain Generalisation Network (CUDGNet).
The key idea is to augment the source capacity in both input and label spaces through the fictitious domain generator.
Our method also provides efficient uncertainty estimation at inference time from a single forward pass through the generator subnetwork.
arXiv Detail & Related papers (2024-03-12T10:47:45Z)
- SUG: Single-dataset Unified Generalization for 3D Point Cloud Classification [44.27324696068285]
We propose a Single-dataset Unified Generalization (SUG) framework to alleviate the unforeseen domain differences faced by a well-trained source model.
Specifically, we first design a Multi-grained Sub-domain Alignment (MSA) method, which can constrain the learned representations to be domain-agnostic and discriminative.
Then, a Sample-level Domain-aware Attention (SDA) strategy is presented, which can selectively enhance easy-to-adapt samples from different sub-domains.
arXiv Detail & Related papers (2023-05-16T04:36:04Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Learning to Diversify for Single Domain Generalization [46.35670520201863]
Domain generalization (DG) aims to generalize a model trained on multiple source (i.e., training) domains to a distributionally different target (i.e., test) domain.
This paper considers a more realistic yet challenging scenario, namely Single Domain Generalization (Single-DG), where only one source domain is available for training.
In this scenario, the limited diversity may jeopardize the model generalization on unseen target domains.
We propose a style-complement module to enhance the generalization power of the model by synthesizing images from diverse distributions that are complementary to the source ones.
arXiv Detail & Related papers (2021-08-26T12:04:32Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)