Contextualizing Meta-Learning via Learning to Decompose
- URL: http://arxiv.org/abs/2106.08112v2
- Date: Mon, 18 Sep 2023 05:57:51 GMT
- Title: Contextualizing Meta-Learning via Learning to Decompose
- Authors: Han-Jia Ye, Da-Wei Zhou, Lanqing Hong, Zhenguo Li, Xiu-Shen Wei,
De-Chuan Zhan
- Abstract summary: We propose Learning to Decompose Network (LeadNet) to contextualize the meta-learned ``support-to-target'' strategy.
LeadNet learns to automatically select the strategy associated with the right attribute by incorporating the change of comparison across contexts with polysemous embeddings.
- Score: 125.76658595408607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning has emerged as an efficient approach for constructing target
models based on support sets. For example, the meta-learned embeddings enable
the construction of target nearest-neighbor classifiers for specific tasks by
pulling instances closer to their same-class neighbors. However, a single
instance can be annotated from various latent attributes, making visually
similar instances inside or across support sets have different labels and
diverse relationships with others. Consequently, a uniform meta-learned
strategy for inferring the target model from the support set fails to capture
the instance-wise ambiguous similarity. To this end, we propose Learning to
Decompose Network (LeadNet) to contextualize the meta-learned
``support-to-target'' strategy, leveraging the context of instances with one or
mixed latent attributes in a support set. In particular, the comparison
relationship between instances is decomposed w.r.t. multiple embedding spaces.
LeadNet learns to automatically select the strategy associated with the right
attribute via incorporating the change of comparison across contexts with
polysemous embeddings. We demonstrate the superiority of LeadNet in various
applications, including exploring multiple views of confusing data,
out-of-distribution recognition, and few-shot image classification.
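The core idea in the abstract can be sketched concretely: build class prototypes from a support set in each of several candidate embedding spaces, then pick the space (latent attribute) that best explains the support comparisons before classifying queries. The following is a minimal toy sketch, not LeadNet's actual architecture: the "embedding spaces" are stubbed with fixed random projections, and the learned context-dependent selection is replaced by a hypothetical separation-margin heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: D-dim instances and two candidate "embedding spaces"
# realized as random linear projections. LeadNet meta-learns such spaces and a
# selection mechanism; here both are stand-ins for illustration only.
D, P = 8, 4
projections = [rng.normal(size=(D, P)) for _ in range(2)]  # two latent attributes

def embed(x, k):
    """Project instances into the k-th embedding space."""
    return x @ projections[k]

def prototypes(support_x, support_y, k):
    """Class prototypes (mean embeddings) in space k, as in nearest-centroid
    classifiers built from meta-learned embeddings."""
    z = embed(support_x, k)
    classes = np.unique(support_y)
    return classes, np.stack([z[support_y == c].mean(axis=0) for c in classes])

def select_space(support_x, support_y):
    """Pick the space whose prototypes best separate the support set
    (a crude stand-in for a learned, context-dependent selection)."""
    def margin(k):
        classes, protos = prototypes(support_x, support_y, k)
        z = embed(support_x, k)
        d = np.linalg.norm(z[:, None] - protos[None], axis=-1)   # (N, C) distances
        own = d[np.arange(len(z)), np.searchsorted(classes, support_y)]
        return (d.mean(axis=1) - own).mean()  # larger = better separated
    return max(range(len(projections)), key=margin)

def classify(query_x, support_x, support_y):
    """Nearest-prototype classification in the selected embedding space."""
    k = select_space(support_x, support_y)
    classes, protos = prototypes(support_x, support_y, k)
    d = np.linalg.norm(embed(query_x, k)[:, None] - protos[None], axis=-1)
    return classes[d.argmin(axis=1)]

# 2-way 5-shot toy episode with class-dependent means
support_x = np.concatenate([rng.normal(0, 1, (5, D)), rng.normal(3, 1, (5, D))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = rng.normal(3, 1, (3, D))
print(classify(query_x, support_x, support_y))
```

The point of the sketch is the decomposition: the same support set can induce different nearest-neighbor classifiers depending on which embedding space is selected, which is why a single uniform support-to-target strategy cannot capture instance-wise ambiguous similarity.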
Related papers
- Preserving Modality Structure Improves Multi-Modal Learning [64.10085674834252]
Self-supervised learning on large-scale multi-modal datasets allows learning semantically meaningful embeddings without relying on human annotations.
These methods often struggle to generalize well on out-of-domain data as they ignore the semantic structure present in modality-specific embeddings.
We propose a novel Semantic-Structure-Preserving Consistency approach to improve generalizability by preserving the modality-specific relationships in the joint embedding space.
arXiv Detail & Related papers (2023-08-24T20:46:48Z)
- MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation [6.053853367809978]
Existing few-shot segmentation methods are based on the meta-learning strategy and extract instance knowledge from a support set.
We propose a multi-information aggregation network (MIANet) that effectively leverages the general knowledge, i.e., semantic word embeddings, and instance information for accurate segmentation.
Experiments on PASCAL-5i and COCO-20i show that MIANet yields superior performance and sets a new state-of-the-art.
arXiv Detail & Related papers (2023-05-23T09:36:27Z)
- Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation [26.79776306494929]
We propose a self-training strategy that employs pseudo-labels to induce cooperation among multiple domain-specific classifiers.
We employ feature stylization as an efficient way to generate image views that form an integral part of self-training.
arXiv Detail & Related papers (2022-10-04T13:03:17Z)
- CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification [11.894289991529496]
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples.
Recent approaches pre-train a feature extractor, and then fine-tune for episodic meta-learning.
We propose a strategy to cross-attend and re-weight discriminative features for few-shot classification.
arXiv Detail & Related papers (2022-03-25T06:14:51Z)
- Learning Prototype-oriented Set Representations for Meta-Learning [85.19407183975802]
Learning from set-structured data is a fundamental problem that has recently attracted increasing attention.
This paper provides a novel optimal transport based way to improve existing summary networks.
We further instantiate it to the cases of few-shot classification and implicit meta generative modeling.
arXiv Detail & Related papers (2021-10-18T09:49:05Z)
- Improving Task Adaptation for Cross-domain Few-shot Learning [41.821234589075445]
Cross-domain few-shot classification aims to learn a classifier from previously unseen classes and domains with few labeled samples.
We show that parametric adapters attached to convolutional layers with residual connections perform best.
arXiv Detail & Related papers (2021-07-01T10:47:06Z)
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
arXiv Detail & Related papers (2021-04-26T15:55:01Z)
- Meta Learning for Few-Shot One-class Classification [0.0]
We formulate the learning of meaningful features for one-class classification as a meta-learning problem.
To learn these representations, we require only multiclass data from similar tasks.
We validate our approach by adapting few-shot classification datasets to the few-shot one-class classification scenario.
arXiv Detail & Related papers (2020-09-11T11:35:28Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods with a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.