Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen
Domains
- URL: http://arxiv.org/abs/2107.07497v1
- Date: Thu, 15 Jul 2021 17:51:16 GMT
- Title: Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen
Domains
- Authors: Puneet Mangla, Shivam Chandhok, Vineeth N Balasubramanian and Fahad
Shahbaz Khan
- Abstract summary: We propose a feature generative framework integrated with a COntext COnditional Adaptive (COCOA) Batch-Normalization.
The generated visual features better capture the underlying data distribution enabling us to generalize to unseen classes and domains at test-time.
We thoroughly evaluate and analyse our approach on the established large-scale DomainNet benchmark.
- Score: 48.17225008334873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent progress towards designing models that can generalize to unseen
domains (i.e., domain generalization) or unseen classes (i.e., zero-shot learning)
has sparked interest in building models that can tackle both domain shift
and semantic shift simultaneously (i.e., zero-shot domain generalization). For
models to generalize to unseen classes in unseen domains, it is crucial to
learn feature representations that preserve class-level (domain-invariant) as
well as domain-specific information. Motivated by the success of generative
zero-shot approaches, we propose a feature generative framework integrated with
a COntext COnditional Adaptive (COCOA) Batch-Normalization to seamlessly
integrate class-level semantic and domain-specific information. The generated
visual features better capture the underlying data distribution, enabling us to
generalize to unseen classes and domains at test time. We thoroughly evaluate
and analyse our approach on the established large-scale DomainNet benchmark and
demonstrate promising performance over baselines and state-of-the-art methods.
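The abstract describes a batch-normalization layer whose affine parameters are conditioned on context rather than fixed. The paper's exact formulation is not reproduced here; the sketch below shows one plausible instantiation of a context-conditional batch norm, where the scale and shift are predicted from a conditioning vector (e.g. a class-semantic plus domain embedding). The function name and the projection matrices `W_gamma`/`W_beta` are illustrative assumptions, not the authors' API.

```python
import numpy as np

def context_conditional_batchnorm(x, context, W_gamma, W_beta, eps=1e-5):
    """Hypothetical sketch of a COCOA-style conditional batch norm.

    Instead of fixed learnable affine parameters, gamma and beta are
    predicted from a per-sample context vector, so the normalized
    features can carry both class-level and domain-specific information.

    x:       (batch, features) activations
    context: (batch, ctx_dim) conditioning vector (assumed: class
             semantic embedding concatenated with a domain embedding)
    W_gamma, W_beta: (ctx_dim, features) projection matrices (assumed)
    """
    # Standard batch-norm statistics over the batch dimension
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)

    # Context-predicted affine parameters; the +1 keeps gamma near
    # identity when the projection outputs are small
    gamma = 1.0 + context @ W_gamma
    beta = context @ W_beta
    return gamma * x_hat + beta
```

With zero projection matrices this reduces to plain (non-affine) batch normalization, which makes the conditioning an additive refinement rather than a replacement of the normalization itself.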
Related papers
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study the multi-source Domain Generalization of text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Cross-Domain Ensemble Distillation for Domain Generalization [17.575016642108253]
We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED).
Our method generates an ensemble of the output logits from training data with the same label but from different domains and then penalizes each output for the mismatch with the ensemble.
We show that models learned by our method are robust against adversarial attacks and image corruptions.
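The XDED summary above describes a concrete loss: form an ensemble of logits from same-label samples across domains, then penalize each prediction's mismatch with the ensemble. The sketch below is one reasonable instantiation under stated assumptions (mean-probability ensemble, KL divergence penalty, temperature smoothing as in standard distillation); the actual paper may use a different divergence or temperature.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def xded_loss(logits, temperature=4.0):
    """Sketch of a cross-domain ensemble distillation penalty (assumed form).

    logits: (num_domains, num_classes) outputs for samples that share
            one label, one row per source domain.
    Each domain's prediction is pulled toward the ensemble (mean)
    distribution via KL(ensemble || p_d), averaged over domains.
    """
    probs = softmax(logits / temperature)
    ensemble = probs.mean(axis=0, keepdims=True)
    kl = (ensemble * (np.log(ensemble + 1e-12)
                      - np.log(probs + 1e-12))).sum(axis=-1)
    return kl.mean()
```

When all domains already agree, the ensemble equals each prediction and the penalty vanishes, so the loss only acts on cross-domain disagreement.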
arXiv Detail & Related papers (2022-11-25T12:32:36Z)
- Domain Generalisation for Object Detection under Covariate and Concept Shift [10.32461766065764]
Domain generalisation aims to promote the learning of domain-invariant features while suppressing domain-specific features.
An approach to domain generalisation for object detection is proposed, the first such approach applicable to any object detection architecture.
arXiv Detail & Related papers (2022-03-10T11:14:18Z)
- Meta-Learned Feature Critics for Domain Generalized Semantic Segmentation [38.81908956978064]
We propose a novel meta-learning scheme with feature disentanglement ability, which derives domain-invariant features for semantic segmentation with domain generalization guarantees.
Our results on benchmark datasets confirm the effectiveness and robustness of our proposed model.
arXiv Detail & Related papers (2021-12-27T06:43:39Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on both time series sensor and image classification public datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z)
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster target domain into multiple sub-target domains by image styles, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm to further improve generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
- Learning to Balance Specificity and Invariance for In and Out of Domain Generalization [27.338573739304604]
We introduce Domain-specific Masks for Generalization, a model for improving both in-domain and out-of-domain generalization performance.
For domain generalization, the goal is to learn from a set of source domains to produce a single model that will best generalize to an unseen target domain.
We demonstrate competitive performance compared to naive baselines and state-of-the-art methods on both PACS and DomainNet.
arXiv Detail & Related papers (2020-08-28T20:39:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.