A Style and Semantic Memory Mechanism for Domain Generalization
- URL: http://arxiv.org/abs/2112.07517v1
- Date: Tue, 14 Dec 2021 16:23:24 GMT
- Title: A Style and Semantic Memory Mechanism for Domain Generalization
- Authors: Yang Chen and Yu Wang and Yingwei Pan and Ting Yao and Xinmei Tian and
Tao Mei
- Abstract summary: Intra-domain style invariance is of pivotal importance in improving the efficiency of domain generalization.
We propose a novel "jury" mechanism, which is particularly effective in learning useful semantic feature commonalities among domains.
Our proposed framework surpasses the state-of-the-art methods by clear margins.
- Score: 108.98041306507372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mainstream state-of-the-art domain generalization algorithms tend to
prioritize the assumption of semantic invariance across domains. Meanwhile, the
inherent intra-domain style invariance is usually underappreciated and left
unexploited. In this paper, we reveal that leveraging intra-domain style
invariance is also of pivotal importance in improving the efficiency of domain
generalization. We verify that it is critical for the network to be informed of
which domain features are invariant and shared among instances, so that the
network sharpens its understanding and improves its semantic discriminative
ability. Correspondingly, we also propose a novel "jury" mechanism, which is
particularly effective in learning useful semantic feature commonalities among
domains. Our complete model, called STEAM, can be interpreted as a novel
probabilistic graphical model whose implementation requires the construction of
two kinds of memory banks: a semantic feature bank and a style feature bank.
Empirical results show that our proposed framework surpasses the
state-of-the-art methods by clear margins.
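The abstract only names the two memory banks; as a rough illustration of what such banks could look like, the sketch below keeps a fixed-size FIFO queue of L2-normalized features per domain. The class name, queue size, and update rule are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the two memory banks named in the STEAM abstract.
# The FIFO layout, queue size, and per-domain keying are illustrative
# assumptions, not the authors' actual implementation.
import torch
import torch.nn.functional as F


class FeatureBank:
    """Fixed-size FIFO bank of L2-normalized feature vectors per domain."""

    def __init__(self, num_domains: int, dim: int, size: int = 1024):
        self.banks = [torch.zeros(size, dim) for _ in range(num_domains)]
        self.ptrs = [0] * num_domains
        self.size = size

    @torch.no_grad()
    def enqueue(self, domain: int, feats: torch.Tensor) -> None:
        # feats: (N, dim) batch of features from one domain.
        feats = F.normalize(feats, dim=1)
        for f in feats:
            self.banks[domain][self.ptrs[domain]] = f
            self.ptrs[domain] = (self.ptrs[domain] + 1) % self.size

    def similarity(self, domain: int, queries: torch.Tensor) -> torch.Tensor:
        # Cosine similarity of each query against every stored entry.
        return F.normalize(queries, dim=1) @ self.banks[domain].t()


# One bank for style features (e.g., channel-wise statistics of early layers)
# and one for semantic features (e.g., penultimate-layer embeddings).
style_bank = FeatureBank(num_domains=3, dim=128)
semantic_bank = FeatureBank(num_domains=3, dim=256)
```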
Related papers
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- Robust Unsupervised Domain Adaptation by Retaining Confident Entropy via Edge Concatenation [7.953644697658355]
Unsupervised domain adaptation can mitigate the need for extensive pixel-level annotations to train semantic segmentation networks.
We introduce a novel approach to domain adaptation, leveraging the synergy of internal and external information within entropy-based adversarial networks.
We devise a probability-sharing network that integrates diverse information for more effective segmentation.
arXiv Detail & Related papers (2023-10-11T02:50:16Z)
- Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP).
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, and makes the model learn against being misled by the unexpected styles observed in unseen target domains.
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially in large-scale benchmarks.
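As a concrete reading of "perturbing the style of a feature in an adversarial direction towards a randomly selected class", the sketch below treats channel-wise mean and standard deviation as the style and takes a single signed-gradient step on those statistics toward a random class. The helper name, the pooled classifier head, and the step size are assumptions, not the authors' exact procedure.

```python
# Illustrative sketch of an adversarial style perturbation in the spirit of
# RASP. Treating instance-wise mean/std as "style" and the single FGSM-like
# step are assumptions for illustration only.
import torch
import torch.nn.functional as F


def perturb_style(feat, classifier, num_classes, eps=0.1):
    """Nudge channel-wise style statistics toward a randomly chosen class."""
    feat = feat.detach()                          # (N, C, H, W) features
    mu0 = feat.mean(dim=(2, 3), keepdim=True)
    sd0 = feat.std(dim=(2, 3), keepdim=True) + 1e-6
    normalized = (feat - mu0) / sd0               # content with style removed

    mu = mu0.clone().requires_grad_(True)         # style parameters to perturb
    sd = sd0.clone().requires_grad_(True)
    # Hypothetical classifier head operating on globally pooled features.
    logits = classifier((normalized * sd + mu).mean(dim=(2, 3)))
    rand_cls = torch.randint(0, num_classes, (feat.size(0),))
    loss = F.cross_entropy(logits, rand_cls)
    grad_mu, grad_sd = torch.autograd.grad(loss, (mu, sd))

    # Single signed-gradient step that moves the style toward the random
    # class (i.e., descends that class's loss).
    mu_adv = mu.detach() - eps * grad_mu.sign()
    sd_adv = sd.detach() - eps * grad_sd.sign()
    return (normalized * sd_adv + mu_adv).detach()
```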
arXiv Detail & Related papers (2023-04-04T17:07:06Z)
- Preserving Domain Private Representation via Mutual Information Maximization [3.2597336130674317]
We propose an approach to preserve the representation that is private to the label-missing domain.
Our approach outperforms state-of-the-art methods on several public datasets.
arXiv Detail & Related papers (2022-01-09T22:55:57Z)
- Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen Domains [48.17225008334873]
We propose a feature generative framework integrated with a COntext COnditional Adaptive (COCOA) Batch-Normalization.
The generated visual features better capture the underlying data distribution, enabling us to generalize to unseen classes and domains at test time.
We thoroughly evaluate and analyse our approach on the established large-scale benchmark DomainNet.
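COCOA's exact formulation is not given here; the following is a generic context-conditional batch-normalization sketch in the same spirit, where the affine parameters are generated from a context embedding. The linear heads, the residual (1 + g) scaling, and the source of the context vector are assumptions.

```python
# Generic context-conditional batch normalization: the affine parameters are
# predicted from a context embedding instead of being fixed. Shapes and the
# residual scaling are illustrative assumptions.
import torch
import torch.nn as nn


class ContextConditionalBN(nn.Module):
    def __init__(self, num_features: int, context_dim: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(context_dim, num_features)
        self.beta = nn.Linear(context_dim, num_features)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); context: (N, context_dim), e.g. a domain embedding.
        h = self.bn(x)
        g = self.gamma(context).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(context).unsqueeze(-1).unsqueeze(-1)
        return (1 + g) * h + b
```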
arXiv Detail & Related papers (2021-07-15T17:51:16Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
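For readers unfamiliar with the ingredient, the sketch below shows a generic variational Bayesian linear layer of the kind such a two-layer Bayesian classifier could be built from (Bayes-by-Backprop-style reparameterization with a standard-normal prior); the parameterization is an assumption, not the paper's.

```python
# Generic variational Bayesian linear layer: weights are sampled from a
# learned Gaussian posterior, with a KL term to a standard-normal prior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.w_logvar = nn.Parameter(torch.full((out_dim, in_dim), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sample weights with the reparameterization trick.
        std = torch.exp(0.5 * self.w_logvar)
        w = self.w_mu + std * torch.randn_like(std)
        return F.linear(x, w, self.bias)

    def kl_to_standard_normal(self) -> torch.Tensor:
        # KL(q(w) || N(0, I)), added to the training loss as a regularizer.
        return 0.5 * torch.sum(
            self.w_mu.pow(2) + self.w_logvar.exp() - 1.0 - self.w_logvar)
```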
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations by the proposed principle of meta variational information bottleneck, which we call MetaVIB.
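The meta-learning episode structure of MetaVIB is beyond this summary; the sketch below only illustrates the underlying variational information bottleneck objective it builds on, with illustrative encoder and classifier shapes.

```python
# Bare-bones variational information bottleneck objective: a stochastic
# latent code z is penalized toward a standard-normal prior while remaining
# predictive of the label. Shapes and the beta weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VIBClassifier(nn.Module):
    def __init__(self, in_dim: int, z_dim: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, 2 * z_dim)   # outputs [mu, logvar]
        self.classifier = nn.Linear(z_dim, num_classes)

    def forward(self, x: torch.Tensor, y: torch.Tensor, beta: float = 1e-3):
        # x: (N, in_dim) features; y: (N,) integer class labels.
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        ce = F.cross_entropy(self.classifier(z), y)
        kl = 0.5 * torch.mean(
            torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1))
        return ce + beta * kl  # bottleneck trades accuracy vs. compression
```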
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.