Universal Representation Learning from Multiple Domains for Few-shot
Classification
- URL: http://arxiv.org/abs/2103.13841v1
- Date: Thu, 25 Mar 2021 13:49:12 GMT
- Title: Universal Representation Learning from Multiple Domains for Few-shot
Classification
- Authors: Wei-Hong Li, Xialei Liu, Hakan Bilen
- Abstract summary: We propose to learn a single set of universal deep representations by distilling knowledge of multiple separately trained networks.
We show that the universal representations can be further refined for previously unseen domains by an efficient adaptation step.
- Score: 41.821234589075445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we look at the problem of few-shot classification that aims to
learn a classifier for previously unseen classes and domains from few labeled
samples. Recent methods use adaptation networks for aligning their features to
new domains or select the relevant features from multiple domain-specific
feature extractors. In this work, we propose to learn a single set of universal
deep representations by distilling knowledge of multiple separately trained
networks after co-aligning their features with the help of adapters and
centered kernel alignment. We show that the universal representations can be
further refined for previously unseen domains by an efficient adaptation step
in a similar spirit to distance learning methods. We rigorously evaluate our
model in the recent Meta-Dataset benchmark and demonstrate that it
significantly outperforms the previous methods while being more efficient. Our
code will be available at https://github.com/VICO-UoE/URL.
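As a concrete illustration of the objective the abstract describes, below is a minimal PyTorch sketch of distillation with linear centered kernel alignment (CKA): the shared student's features pass through a small per-domain adapter and are pushed toward each frozen teacher's features. The function and adapter names are illustrative assumptions, not the authors' code; the official implementation lives at the URL above.

import torch

def linear_cka(x, y):
    # x: (n, d1), y: (n, d2) feature matrices; center each feature dimension.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # Linear CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    num = (y.t() @ x).norm() ** 2
    den = (x.t() @ x).norm() * (y.t() @ y).norm()
    return num / den

def distillation_loss(student_feats, teacher_feats_list, adapters):
    # Co-align the student with each frozen domain-specific teacher: a
    # lightweight per-domain adapter maps student features into the
    # teacher's space, and 1 - CKA penalizes the remaining misalignment.
    loss = 0.0
    for adapter, t in zip(adapters, teacher_feats_list):
        loss = loss + (1.0 - linear_cka(adapter(student_feats), t.detach()))
    return loss / len(teacher_feats_list)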
Related papers
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741] (arXiv 2023-09-07)
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations [80.76164484820818] (arXiv 2022-10-25)
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
Built on a selective balanced sampling strategy, the proposed method (TALLY) achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
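A rough sketch of that mixing step, assuming the encoder already yields a disentangled semantic half and a domain-nuisance half per example; the names and the random-permutation pairing are illustrative assumptions, not the paper's exact procedure.

import torch

def augment_batch(semantic, nuisance, decoder):
    # semantic, nuisance: (n, d) disentangled halves of encoded examples.
    # Pair each example's semantic content with the domain-associated
    # nuisances of another (here, a random permutation of the batch).
    perm = torch.randperm(semantic.size(0))
    mixed = torch.cat([semantic, nuisance[perm]], dim=-1)
    return decoder(mixed)  # synthesized features for training the classifier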
- Adversarial Feature Augmentation for Cross-domain Few-shot Classification [2.68796389443975] (arXiv 2022-08-23)
We propose a novel adversarial feature augmentation (AFA) method to bridge the domain gap in few-shot learning.
The proposed method is a plug-and-play module that can be easily integrated into existing few-shot learning methods.
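One plausible feature-space instantiation of such adversarial augmentation is an FGSM-style perturbation of support features, sketched below; this is a generic stand-in under stated assumptions, not necessarily the paper's exact formulation.

import torch

def adversarial_feature_aug(feats, labels, classifier, loss_fn, eps=0.1):
    # FGSM applied in feature space: move features in the direction that
    # most increases the loss, mimicking a worst-case domain shift.
    feats = feats.detach().requires_grad_(True)
    loss = loss_fn(classifier(feats), labels)
    grad, = torch.autograd.grad(loss, feats)
    return (feats + eps * grad.sign()).detach()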
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637] (arXiv 2022-07-07)
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
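A hedged sketch of what a two-forward, one-backward iteration could look like, assuming a swap_styles operation that exchanges feature statistics between samples; the actual IL framework's memory-based design is more involved.

def interleaved_step(model, images, labels, swap_styles, criterion, optimizer):
    # Two forward propagations per iteration (original view and a
    # style-interleaved view), then one backward pass on the combined loss.
    logits_orig = model(images)
    logits_mixed = model(swap_styles(images))
    loss = criterion(logits_orig, labels) + criterion(logits_mixed, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()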
- Few-Shot Classification in Unseen Domains by Episodic Meta-Learning Across Visual Domains [36.98387822136687] (arXiv 2021-12-27)
Few-shot classification aims to carry out classification given only a few labeled examples for the categories of interest.
In this paper, we present a unique learning framework for domain-generalized few-shot classification.
By advancing meta-learning strategies, our learning framework exploits data across multiple source domains to capture domain-invariant features.
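A minimal sketch of domain-episodic sampling consistent with this idea: each training episode is drawn from one randomly chosen source domain, so the meta-learner is pushed toward domain-invariant features. The data layout (a dict mapping class to examples per domain) is an assumption.

import random

def sample_episode(domains, n_way=5, k_shot=1, n_query=15):
    # Draw one few-shot episode from a single randomly chosen source domain,
    # so successive episodes expose the meta-learner to different domains.
    domain = random.choice(domains)  # assumed: dict mapping class -> examples
    classes = random.sample(list(domain), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = random.sample(domain[cls], k_shot + n_query)
        support += [(x, label) for x in picks[:k_shot]]
        query += [(x, label) for x in picks[k_shot:]]
    return support, query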
- Improving Task Adaptation for Cross-domain Few-shot Learning [41.821234589075445] (arXiv 2021-07-01)
Cross-domain few-shot classification aims to learn a classifier from previously unseen classes and domains with few labeled samples.
We show that parametric adapters attached to convolutional layers with residual connections perform best.
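Below is a minimal sketch of such a parametric adapter: a 1x1 convolution attached to a frozen convolutional layer through a residual connection, initialized to the identity; the details are illustrative rather than the paper's exact configuration.

import torch.nn as nn

class ResidualAdapter(nn.Module):
    # A 1x1 convolution attached to a (frozen) convolutional layer via
    # a residual connection, so the adapted output is x + adapter(x).
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.conv.weight)  # start as the identity mapping

    def forward(self, x):
        return x + self.conv(x)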
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078] (arXiv 2021-06-10)
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
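For reference, the contrastive methods revisited here typically optimize an InfoNCE-style objective over two augmented views of each image; a common simplified form is sketched below (the names and the single-direction loss are simplifying assumptions).

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: (n, d) embeddings of two augmented views of the same images.
    # Matching rows are positives; all other rows in the batch are negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)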
- A Universal Representation Transformer Layer for Few-Shot Image Classification [43.31379752656756] (arXiv 2020-06-21)
Few-shot classification aims to recognize unseen classes when presented with only a small number of samples.
We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources.
Here, we propose a Universal Representation Transformer layer that meta-learns to leverage universal features for few-shot classification.
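A hedged sketch of the kind of attention such a layer computes: a task embedding (e.g., the mean support-set feature) queries a bank of domain-specific features and returns their weighted combination. Shapes and names are assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class UniversalAttention(nn.Module):
    # Task-conditioned attention over a bank of domain-specific features:
    # a task embedding queries the bank and returns a weighted combination.
    def __init__(self, feat_dim):
        super().__init__()
        self.query = nn.Linear(feat_dim, feat_dim)
        self.key = nn.Linear(feat_dim, feat_dim)

    def forward(self, domain_feats, task_embedding):
        # domain_feats: (n_domains, n, d); task_embedding: (d,), e.g. the
        # mean support-set feature of the current episode.
        q = self.query(task_embedding)                          # (d,)
        k = self.key(domain_feats.mean(dim=1))                  # (n_domains, d)
        attn = torch.softmax(k @ q / k.size(-1) ** 0.5, dim=0)  # (n_domains,)
        return (attn[:, None, None] * domain_feats).sum(dim=0)  # (n, d)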
- Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification [91.67977602992657] (arXiv 2020-03-20)
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
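The non-parametric classifier mentioned here can be read as a nearest-centroid rule over the selected features; a minimal sketch follows, with cosine similarity as one common choice of metric.

import torch
import torch.nn.functional as F

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    # Average the selected features of each support class into a centroid,
    # then assign each query to the most cosine-similar centroid.
    classes = support_labels.unique()
    centroids = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in classes])
    centroids = F.normalize(centroids, dim=1)
    queries = F.normalize(query_feats, dim=1)
    return classes[(queries @ centroids.t()).argmax(dim=1)]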
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.