Feature Extractor Stacking for Cross-domain Few-shot Learning
- URL: http://arxiv.org/abs/2205.05831v4
- Date: Tue, 24 Oct 2023 22:11:31 GMT
- Title: Feature Extractor Stacking for Cross-domain Few-shot Learning
- Authors: Hongyu Wang, Eibe Frank, Bernhard Pfahringer, Michael Mayo, Geoffrey
Holmes
- Abstract summary: Cross-domain few-shot learning addresses learning problems where knowledge needs to be transferred from one or more source domains into an instance-scarce target domain with an explicitly different distribution.
We propose feature extractor stacking (FES), a new CDFSL method for combining information from a collection of extractors out of the box.
We present the basic FES algorithm, which is inspired by the classic stacked generalisation approach, and also introduce two variants: convolutional FES (ConFES) and regularised FES (ReFES).
- Score: 7.624311495433939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-domain few-shot learning (CDFSL) addresses learning problems where
knowledge needs to be transferred from one or more source domains into an
instance-scarce target domain with an explicitly different distribution.
Recently published CDFSL methods generally construct a universal model that
combines knowledge of multiple source domains into one feature extractor. This
enables efficient inference but necessitates re-computation of the extractor
whenever a new source domain is added. Some of these methods are also
incompatible with heterogeneous source domain extractor architectures. We
propose feature extractor stacking (FES), a new CDFSL method for combining
information from a collection of extractors, that can utilise heterogeneous
pretrained extractors out of the box and does not maintain a universal model
that needs to be re-computed when its extractor collection is updated. We
present the basic FES algorithm, which is inspired by the classic stacked
generalisation approach, and also introduce two variants: convolutional FES
(ConFES) and regularised FES (ReFES). Given a target-domain task, these
algorithms fine-tune each extractor independently, use cross-validation to
extract training data for stacked generalisation from the support set, and
learn a simple linear stacking classifier from this data. We evaluate our FES
methods on the well-known Meta-Dataset benchmark, targeting image
classification with convolutional neural networks, and show that they can
achieve state-of-the-art performance.
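For concreteness, here is a minimal sketch of the stacking stage described in the abstract, written with scikit-learn for brevity. It assumes each extractor has already been fine-tuned on the support set; the names `fes_fit` and `fit_head` are illustrative, and this is a sketch of the general idea, not the authors' implementation.

```python
# A minimal sketch of the FES stacking stage, assuming each extractor has
# already been fine-tuned on the support set. `fes_fit` and `fit_head` are
# illustrative names, not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

def fes_fit(extractors, fit_head, support_x, support_y, n_folds=5):
    """Learn a linear stacking classifier over per-extractor predictions.

    extractors : list of callables mapping one image to a feature vector
    fit_head   : callable (features, labels) -> model with predict_proba
    """
    support_y = np.asarray(support_y)
    n_classes = len(np.unique(support_y))
    skf = StratifiedKFold(n_splits=n_folds)
    meta_features = []
    for extract in extractors:
        feats = np.stack([extract(x) for x in support_x])
        # Cross-validation yields out-of-fold class scores on the support
        # set; these become the training data for stacked generalisation.
        oof = np.zeros((len(support_y), n_classes))
        for train_idx, val_idx in skf.split(feats, support_y):
            head = fit_head(feats[train_idx], support_y[train_idx])
            oof[val_idx] = head.predict_proba(feats[val_idx])
        meta_features.append(oof)
    # A simple linear model stacks the concatenated per-extractor scores.
    stacker = LogisticRegression(max_iter=1000)
    stacker.fit(np.hstack(meta_features), support_y)
    return stacker
```

At inference time, each extractor's head would be refit on the full support set and the stacker applied to the concatenated query scores; the ConFES and ReFES variants modify this stacking step (with convolutional kernels and regularisation, respectively) and are not reproduced here.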
Related papers
- Source-Free Domain Adaptation Guided by Vision and Vision-Language Pre-Training [23.56208527227504]
Source-free domain adaptation (SFDA) aims to adapt a source model trained on a fully-labeled source domain to a related but unlabeled target domain.
In the conventional SFDA pipeline, a feature extractor pre-trained on large-scale data (e.g. ImageNet) is used to initialize the source model.
We introduce an integrated framework to incorporate pre-trained networks into the target adaptation process.
arXiv Detail & Related papers (2024-05-05T14:48:13Z)
- Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features [37.518025833882334]
Open Domain Generalization (ODG) is a challenging task as it deals with distribution shifts and category shifts.
Previous work has used multiple source-specific networks, which involve a high cost.
This paper proposes a method that can handle ODG using only a single network.
arXiv Detail & Related papers (2023-12-08T16:22:10Z)
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for domain generalizable person re-identification (DG ReID).
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation in each iteration (a generic sketch of this pattern appears after this list).
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it attempts to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Universal Representation Learning from Multiple Domains for Few-shot Classification [41.821234589075445]
We propose to learn a single set of universal deep representations by distilling knowledge of multiple separately trained networks.
We show that the universal representations can be further refined for previously unseen domains by an efficient adaptation step.
arXiv Detail & Related papers (2021-03-25T13:49:12Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting in which only a trained source model is available, and asks how such a model can be used effectively, without source data, to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
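As referenced in the style interleaved learning entry above, here is a minimal PyTorch sketch of a two-forward, one-backward training iteration. The `interleave_styles` function is a hypothetical AdaIN-style stand-in for a style-mixing step, not the actual IL method from that paper.

```python
# A minimal sketch of a two-forward, one-backward iteration.
# `interleave_styles` is a hypothetical AdaIN-style stand-in.
import torch

def interleave_styles(x: torch.Tensor) -> torch.Tensor:
    """Swap per-sample channel statistics with those of a shuffled batch."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
    perm = torch.randperm(x.size(0))
    return (x - mu) / sigma * sigma[perm] + mu[perm]

def il_step(model, criterion, optimizer, images, labels):
    optimizer.zero_grad()
    loss_original = criterion(model(images), labels)    # forward pass 1
    mixed = interleave_styles(images)
    loss_interleaved = criterion(model(mixed), labels)  # forward pass 2
    (loss_original + loss_interleaved).backward()       # single backward pass
    optimizer.step()
    return loss_original.item(), loss_interleaved.item()
```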