LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization
- URL: http://arxiv.org/abs/2410.17020v2
- Date: Fri, 25 Oct 2024 11:02:49 GMT
- Title: LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization
- Authors: Liang Chen, Yong Zhang, Yibing Song, Zhiqiang Shen, Lingqiao Liu
- Abstract summary: Domain generalization (DG) methods aim to maintain good performance in an unseen target domain by using training data from multiple source domains.
This work introduces a simple yet effective framework, dubbed learning from multiple experts (LFME), that aims to make the target model an expert in all source domains to improve DG.
- Score: 61.16890890570814
- Abstract: Domain generalization (DG) methods aim to maintain good performance in an unseen target domain by using training data from multiple source domains. While success is observed on certain occasions, enhancing the baseline across most scenarios remains challenging. This work introduces a simple yet effective framework, dubbed learning from multiple experts (LFME), that aims to make the target model an expert in all source domains to improve DG. Specifically, besides learning the target model used in inference, LFME also trains multiple experts specialized in different domains, whose output probabilities provide professional guidance by simply regularizing the logits of the target model. Delving deep into the framework, we reveal that the introduced logit regularization term implicitly enables the target model to harness more information and to mine hard samples from the experts during training. Extensive experiments on benchmarks from different DG tasks demonstrate that LFME is consistently beneficial to the baseline and achieves performance comparable to existing state-of-the-art methods. Code is available at~\url{https://github.com/liangchen527/LFME}.
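The abstract describes regularizing the target model's logits toward the output probabilities of a per-domain expert. As a minimal sketch of that idea (not the paper's exact loss; the function name, KL-divergence form, and `eps` smoothing are assumptions for illustration):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def lfme_regularizer(target_logits, expert_probs, eps=1e-12):
    # One plausible form of the logit regularization term:
    # KL(expert || target), averaged over the batch, which pulls the
    # target model's predictive distribution toward the domain
    # expert's output probabilities.
    log_q = np.log(softmax(target_logits) + eps)
    kl = (expert_probs * (np.log(expert_probs + eps) - log_q)).sum(axis=-1)
    return float(kl.mean())
```

During training this term would be added to the usual task loss, with each sample guided by the expert trained on its own source domain.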
Related papers
- Rethinking Multi-domain Generalization with A General Learning Objective [19.28143363034362]
Multi-domain generalization (mDG) aims to minimize the discrepancy between training and testing distributions.
Existing mDG literature lacks a general learning objective paradigm.
We propose to leverage a $Y$-mapping to relax the constraint.
arXiv Detail & Related papers (2024-02-29T05:00:30Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- Meta Adaptive Task Sampling for Few-Domain Generalization [43.2043988610497]
Few-domain generalization (FDG) aims to learn a generalizable model from very few domains of novel tasks.
We propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
arXiv Detail & Related papers (2023-05-25T01:44:09Z)
- MultiMatch: Multi-task Learning for Semi-supervised Domain Generalization [55.06956781674986]
We address the semi-supervised domain generalization task, where only a small amount of label information is available in each source domain.
We propose MultiMatch, which extends FixMatch to a multi-task learning framework to produce high-quality pseudo-labels for SSDG.
A series of experiments validates the effectiveness of the proposed method, which outperforms existing semi-supervised methods and SSDG methods on several benchmark DG datasets.
arXiv Detail & Related papers (2022-08-11T14:44:33Z)
- More is Better: A Novel Multi-view Framework for Domain Generalization [28.12350681444117]
A key issue in domain generalization (DG) is how to prevent overfitting to the observed source domains.
By treating tasks and images as different views, we propose a novel multi-view DG framework.
At test time, to alleviate unstable predictions, we use multiple augmented images to produce a multi-view prediction.
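The multi-view test-time prediction described above amounts to averaging the model's outputs over several augmented views of the same input. A generic sketch of this idea, where `model` and `augmentations` are hypothetical stand-ins for the paper's components:

```python
import numpy as np

def multi_view_predict(model, image, augmentations):
    # Apply each augmentation to the input, run the model on every
    # view, and average the resulting class probabilities to obtain
    # a more stable multi-view prediction.
    views = [aug(image) for aug in augmentations]
    probs = np.stack([model(v) for v in views])
    return probs.mean(axis=0)
```

Averaging probabilities across views reduces the variance introduced by any single augmentation, which is the stated motivation for the ensemble at test time.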
arXiv Detail & Related papers (2021-12-23T02:51:35Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Multi-Target Domain Adaptation with Collaborative Consistency Learning [105.7615147382486]
We propose a collaborative learning framework to achieve unsupervised multi-target domain adaptation.
The proposed method can effectively exploit rich structured information contained in both the labeled source domain and multiple unlabeled target domains.
arXiv Detail & Related papers (2021-06-07T08:36:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.