Batch Normalization Embeddings for Deep Domain Generalization
- URL: http://arxiv.org/abs/2011.12672v3
- Date: Tue, 18 May 2021 09:58:12 GMT
- Title: Batch Normalization Embeddings for Deep Domain Generalization
- Authors: Mattia Segu, Alessio Tonioni, Federico Tombari
- Abstract summary: Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
- Score: 50.51405390150066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization aims at training machine learning models to perform
robustly across different and unseen domains. Several recent methods use
multiple datasets to train models to extract domain-invariant features, hoping
to generalize to unseen domains. Instead, we first explicitly train
domain-dependent representations by using ad-hoc batch normalization layers to
collect independent statistics for each domain. Then, we propose to use these
statistics to map domains into a shared latent space, where membership to a
domain can be measured by means of a distance function. At test time, we
project samples from an unknown domain into the same space and infer properties
of their domain as a linear combination of the known ones. We apply the same
mapping strategy at training and test time, learning both a latent
representation and a powerful but lightweight ensemble model. We show a
significant increase in classification accuracy over current state-of-the-art
techniques on popular domain generalization benchmarks: PACS, Office-31 and
Office-Caltech.
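A minimal PyTorch sketch of the mechanism the abstract describes: one batch normalization layer per source domain over shared weights, domain embeddings read off the accumulated BN statistics, and a distance-weighted linear combination of the domain branches at test time. The single-conv architecture, the Euclidean distance, and the softmax weighting are illustrative assumptions rather than the paper's exact design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainBNNet(nn.Module):
    """Shared conv weights with one BatchNorm2d per training domain."""
    def __init__(self, num_domains, num_classes, ch=64):
        super().__init__()
        self.conv = nn.Conv2d(3, ch, 3, padding=1)   # shared across domains
        self.bns = nn.ModuleList(nn.BatchNorm2d(ch) for _ in range(num_domains))
        self.head = nn.Linear(ch, num_classes)

    def forward(self, x, d):
        # Normalizing with the d-th domain's BN makes the representation
        # domain-dependent while the convolutional weights stay shared.
        h = F.relu(self.bns[d](self.conv(x)))
        return self.head(h.mean(dim=(2, 3)))         # global average pooling

    def domain_embedding(self, d):
        # Map a domain to the vector of its accumulated BN statistics
        # (here: concatenated running mean and running std).
        bn = self.bns[d]
        return torch.cat([bn.running_mean, bn.running_var.sqrt()])

@torch.no_grad()
def predict_unseen(model, x, num_domains, temp=1.0):
    # Project the test batch into the same latent space via its own
    # activation statistics over the shared conv.
    h = model.conv(x)
    e = torch.cat([h.mean(dim=(0, 2, 3)), h.std(dim=(0, 2, 3))])
    # Membership to each known domain, measured by a distance function.
    dists = torch.stack([(e - model.domain_embedding(d)).norm()
                         for d in range(num_domains)])
    w = F.softmax(-dists / temp, dim=0)              # closer domain -> larger weight
    # Lightweight ensemble: linear combination of the domain branches.
    logits = torch.stack([model(x, d) for d in range(num_domains)])
    return (w.view(-1, 1, 1) * logits).sum(dim=0)
```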
Related papers
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$^3$G to learn domain-specific models.
Our results show that D$^3$G consistently outperforms state-of-the-art methods.
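The summary leaves the mechanics implicit; below is a hedged sketch of one plausible reading, with domain-specific heads combined by domain-relation weights. The `relations` input is a hypothetical stand-in for whatever similarity D$^3$G actually derives between domains.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationWeightedEnsemble(nn.Module):
    """One classification head per training domain over a shared backbone."""
    def __init__(self, backbone, feat_dim, num_domains, num_classes):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(nn.Linear(feat_dim, num_classes)
                                   for _ in range(num_domains))

    def forward(self, x, relations):
        # relations: (num_domains,) similarity of the current domain to each
        # training domain (assumed given; its construction is paper-specific).
        f = self.backbone(x)
        logits = torch.stack([head(f) for head in self.heads])  # (D, B, C)
        w = F.softmax(relations, dim=0).view(-1, 1, 1)
        return (w * logits).sum(dim=0)
```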
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Domain-General Crowd Counting in Unseen Scenarios [25.171343652312974]
Domain shift across crowd data severely hinders crowd counting models from generalizing to unseen scenarios.
We introduce a dynamic sub-domain division scheme which divides the source domain into multiple sub-domains.
In order to disentangle domain-invariant information from domain-specific information in image features, we design the domain-invariant and -specific crowd memory modules.
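A loose sketch of the memory-module idea, assuming simple attention reads over two learnable memories, one meant to hold domain-invariant content and one domain-specific content. The slot count, the cosine-overlap penalty, and the name `CrowdMemory` are illustrative choices, not the paper's actual modules.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrowdMemory(nn.Module):
    def __init__(self, dim, slots=32):
        super().__init__()
        self.invariant = nn.Parameter(torch.randn(slots, dim))  # shared content
        self.specific = nn.Parameter(torch.randn(slots, dim))   # sub-domain content

    @staticmethod
    def read(feats, memory):
        # Attention read: each feature pulls a soft mixture of memory slots.
        attn = F.softmax(feats @ memory.t(), dim=-1)             # (B, slots)
        return attn @ memory                                     # (B, dim)

    def forward(self, feats):
        inv = self.read(feats, self.invariant)
        spec = self.read(feats, self.specific)
        # One possible disentanglement signal: penalize overlap between the
        # invariant and specific readouts.
        overlap = F.cosine_similarity(inv, spec, dim=-1).abs().mean()
        return inv, spec, overlap
```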
arXiv Detail & Related papers (2022-12-05T19:52:28Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
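A short sketch of what "selectively enforces prediction consistency" could look like: a KL penalty between per-domain branch predictions on the same batch, applied only to source-domain pairs flagged as similar. The pair-selection input is an assumption; the paper's selection criterion may differ.
```python
import torch.nn.functional as F

def selective_consistency(logits_per_domain, similar_pairs):
    """logits_per_domain: list of (B, C) logits for the same batch,
    one entry per source-domain branch."""
    loss = 0.0
    for i, j in similar_pairs:                      # e.g. [(0, 1), (1, 2)]
        log_p = F.log_softmax(logits_per_domain[i], dim=-1)
        q = F.softmax(logits_per_domain[j], dim=-1)
        loss = loss + F.kl_div(log_p, q, reduction="batchmean")
    return loss / max(len(similar_pairs), 1)
```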
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels that generate instance-adaptive residuals, adapting domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
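A sketch of the dynamic-kernel idea in the summary: a small controller predicts a depthwise kernel per instance, and the convolution output is added back as an instance-adaptive residual. The kernel size, pooling, and module name are illustrative, not DIDA-Net's exact design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicResidual(nn.Module):
    def __init__(self, ch, k=3):
        super().__init__()
        self.ch, self.k = ch, k
        self.controller = nn.Linear(ch, ch * k * k)   # one kernel per instance

    def forward(self, x):                             # x: (B, C, H, W)
        b, c, h, w = x.shape
        kernels = self.controller(x.mean(dim=(2, 3))) # conditioned on pooled features
        kernels = kernels.view(b * c, 1, self.k, self.k)
        # Grouped-conv trick: fold the batch into channels so every instance
        # gets its own depthwise kernel in a single conv2d call.
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=self.k // 2, groups=b * c)
        return x + out.view(b, c, h, w)               # instance-adaptive residual
```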
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
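A compact sketch of the adapter recipe: small bottleneck modules added residually to a frozen backbone, here stacked along a hypothetical domain-hierarchy path (root, then a coarse domain, then a fine one). Only the bottleneck-adapter shape follows standard practice; the hierarchy handling is simplified.
```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

class HierarchicalAdapters(nn.Module):
    def __init__(self, dim, path_depth=3):
        super().__init__()
        # One adapter per node on the root-to-domain path; coarse levels can
        # be shared across many related domains.
        self.path = nn.ModuleList(Adapter(dim) for _ in range(path_depth))

    def forward(self, h):
        for adapter in self.path:
            h = adapter(h)
        return h
```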
arXiv Detail & Related papers (2021-12-16T11:09:29Z)
- Adaptive Methods for Aggregated Domain Generalization [26.215904177457997]
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
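One way to read the two-step, label-free recipe, with k-means standing in for the paper's actual clustering choice: recover pseudo-domain labels from feature clusters, then hand them to any domain-aware training method. `model` is assumed to return feature vectors.
```python
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def pseudo_domain_labels(model, loader, num_pseudo_domains=4, device="cpu"):
    feats = []
    for x, _ in loader:                           # class labels unused here
        feats.append(model(x.to(device)).cpu())   # model(x) -> (B, feat_dim)
    feats = torch.cat(feats).numpy()
    # Step 1: cluster in feature space; cluster ids act as domain labels.
    return KMeans(n_clusters=num_pseudo_domains, n_init=10).fit_predict(feats)
    # Step 2 (not shown): train a domain-adaptive model on these pseudo-labels.
```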
arXiv Detail & Related papers (2021-12-09T08:57:01Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
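A generic instance of the cross-domain contrastive idea: pull together source and target features that share a (pseudo-)class and push apart the rest via an InfoNCE-style loss. This is a standard formulation, not necessarily the paper's exact objective.
```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive(src, tgt, labels_src, labels_tgt, temp=0.1):
    """src: (Ns, D) source features; tgt: (Nt, D) target features."""
    src = F.normalize(src, dim=1)
    tgt = F.normalize(tgt, dim=1)
    sim = src @ tgt.t() / temp                              # (Ns, Nt)
    pos = labels_src.view(-1, 1).eq(labels_tgt.view(1, -1)).float()
    # For each source sample, maximize log-likelihood on same-class targets.
    log_prob = F.log_softmax(sim, dim=1)
    per_sample = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return per_sample.mean()
```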
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Adaptive Domain-Specific Normalization for Generalizable Person Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.