COLUMBUS: Automated Discovery of New Multi-Level Features for Domain
Generalization via Knowledge Corruption
- URL: http://arxiv.org/abs/2109.04320v1
- Date: Thu, 9 Sep 2021 14:52:05 GMT
- Title: COLUMBUS: Automated Discovery of New Multi-Level Features for Domain
Generalization via Knowledge Corruption
- Authors: Ahmed Frikha, Denis Krompaß, Volker Tresp
- Abstract summary: We address the challenging domain generalization problem, where a model trained on a set of source domains is expected to generalize well in unseen domains without exposure to their data.
We propose Columbus, a method that enforces new feature discovery via a targeted corruption of the most relevant input and multi-level representations of the data.
- Score: 12.555885317622131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models that can generalize to unseen domains are essential
when applied in real-world scenarios involving strong domain shifts. We address
the challenging domain generalization (DG) problem, where a model trained on a
set of source domains is expected to generalize well in unseen domains without
any exposure to their data. The main challenge of DG is that the features
learned from the source domains are not necessarily present in the unseen
target domains, leading to performance deterioration. We assume that learning a
richer set of features is crucial to improve the transfer to a wider set of
unknown domains. For this reason, we propose COLUMBUS, a method that enforces
new feature discovery via a targeted corruption of the most relevant input and
multi-level representations of the data. We conduct an extensive empirical
evaluation to demonstrate the effectiveness of the proposed approach which
achieves new state-of-the-art results by outperforming 18 DG algorithms on
multiple DG benchmark datasets in the DomainBed framework.
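The abstract does not spell out how "the most relevant" input and representations are identified or how the corruption is applied. As a rough, illustrative sketch only: the snippet below masks the most gradient-salient input pixels before each training step, so the model is pushed to rely on features it has not used yet. The saliency criterion, the masking operator, the corruption ratio, and the restriction to the input level are assumptions of this sketch, not the paper's actual procedure.

    # Hedged sketch of "targeted corruption" for feature discovery (PyTorch).
    # Assumption: relevance is measured by input-gradient saliency; COLUMBUS's
    # real criterion and its multi-level (intermediate-layer) corruption differ.
    import torch
    import torch.nn.functional as F

    def corrupt_most_relevant(model, x, y, corrupt_ratio=0.1):
        """Zero out the input locations with the highest gradient saliency."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        saliency = grad.abs().sum(dim=1, keepdim=True)   # (B, 1, H, W)
        flat = saliency.flatten(1)                       # (B, H*W)
        k = max(1, int(corrupt_ratio * flat.shape[1]))
        cutoff = flat.topk(k, dim=1).values[:, -1:]      # per-sample k-th largest saliency
        keep = (flat < cutoff).float().view_as(saliency) # 0 where saliency is in the top-k
        return (x * keep).detach()

    def columbus_like_step(model, optimizer, x, y):
        """One training step on the corrupted view of the batch."""
        x_corrupted = corrupt_most_relevant(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_corrupted), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Extending the same idea to intermediate representations would amount to hooking selected layers and masking their most salient channels, but the layers and the exact criterion used by COLUMBUS are not given in this summary.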
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoder (DisMAE) aims to discover disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- Overcoming Data Inequality across Domains with Semi-Supervised Domain Generalization [4.921899151930171]
We propose a novel algorithm, ProUD, which can effectively learn domain-invariant features via domain-aware prototypes.
Our experiments on three different benchmark datasets demonstrate the effectiveness of ProUD.
arXiv Detail & Related papers (2024-03-08T10:49:37Z)
- DIGIC: Domain Generalizable Imitation Learning by Causal Discovery [69.13526582209165]
Causality has been combined with machine learning to produce robust representations for domain generalization.
We make a different attempt by leveraging the demonstration data distribution to discover causal features for a domain generalizable policy.
We design a novel framework, called DIGIC, to identify the causal features by finding the direct cause of the expert action from the demonstration data distribution.
arXiv Detail & Related papers (2024-02-29T07:09:01Z)
- Complementary Domain Adaptation and Generalization for Unsupervised Continual Domain Shift Learning [4.921899151930171]
Unsupervised continual domain shift learning is a significant challenge in real-world applications.
We propose Complementary Domain Adaptation and Generalization (CoDAG), a simple yet effective learning framework.
Our approach is model-agnostic, meaning that it is compatible with any existing domain adaptation and generalization algorithms.
arXiv Detail & Related papers (2023-03-28T09:05:15Z)
- Domain generalization Person Re-identification on Attention-aware multi-operation strategery [8.90472129039969]
Domain generalization person re-identification (DG Re-ID) aims to directly deploy a model trained on the source domain to the unseen target domain with good generalization.
In the existing DG Re-ID methods, invariant operations are effective in extracting domain generalization features.
An Attention-aware Multi-operation Strategery (AMS) for DG Re-ID is proposed to extract more generalized features.
arXiv Detail & Related papers (2022-10-19T09:18:46Z)
- Domain-Unified Prompt Representations for Source-Free Domain Generalization [6.614361539661422]
Domain generalization is a surefire way toward general artificial intelligence.
It is difficult for existing methods to scale to diverse domains in open-world scenarios.
We propose an approach based on large-scale vision-language pretraining models.
arXiv Detail & Related papers (2022-09-29T16:44:09Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need of domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class, we randomly select one of the domains and keep it aside for testing (a rough sketch of this split appears after this list).
arXiv Detail & Related papers (2021-10-15T10:06:40Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
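As noted in the "Reappraising Domain Generalization in Neural Networks" entry above, the classwise-DG formulation holds out, for every class, one randomly chosen domain for testing. A minimal sketch of such a split follows; the function name, data layout, and seeding are illustrative assumptions, not that paper's code.

    # Hedged sketch of a classwise-DG split: per class, one random domain is
    # held out for testing and the rest are used for training.
    import random
    from collections import defaultdict

    def classwise_dg_split(samples, seed=0):
        """samples: list of (x, label, domain) triples."""
        rng = random.Random(seed)
        domains_per_class = defaultdict(set)
        for _, label, domain in samples:
            domains_per_class[label].add(domain)
        held_out = {label: rng.choice(sorted(domains))
                    for label, domains in domains_per_class.items()}
        train = [s for s in samples if s[2] != held_out[s[1]]]
        test = [s for s in samples if s[2] == held_out[s[1]]]
        return train, test, held_out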