Selecting task with optimal transport self-supervised learning for
few-shot classification
- URL: http://arxiv.org/abs/2204.00289v1
- Date: Fri, 1 Apr 2022 08:45:29 GMT
- Title: Selecting task with optimal transport self-supervised learning for
few-shot classification
- Authors: Renjie Xu, Xinghao Yang, Baodi Liu, Kai Zhang, Weifeng Liu
- Abstract summary: Few-Shot classification aims to solve problems in which only a few samples are available during training.
We propose a novel task selecting algorithm, named Optimal Transport Task Selecting (OTTS), to construct a training set by selecting similar tasks for Few-Shot learning.
OTTS measures the task similarity by calculating the optimal transport distance and completes the model training via a self-supervised strategy.
- Score: 15.088213168796772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-Shot classification aims to solve problems in which only a few samples are
available during training. Due to the lack of samples, researchers
generally employ a set of training tasks from other domains to assist the
target task, where the distribution of these assistant tasks usually differs from
that of the target task. To reduce this distribution gap, several lines of work
have been proposed, such as data augmentation and domain alignment. However,
a common drawback of these algorithms is that they ignore similarity-based task
selection before training. The fundamental problem is to push the auxiliary
tasks close to the target task. In this paper, we propose a novel task
selecting algorithm, named Optimal Transport Task Selecting (OTTS), to
construct a training set by selecting similar tasks for Few-Shot learning.
Specifically, OTTS measures task similarity by calculating the optimal
transport distance and completes the model training via a self-supervised
strategy. By utilizing the tasks selected by OTTS, the training process of
Few-Shot learning becomes more stable and effective. Other methods, such as
data augmentation and domain alignment, can be used in combination
with OTTS. We conduct extensive experiments on a variety of datasets, including
MiniImageNet, CIFAR, CUB, Cars, and Places, to evaluate the effectiveness of
OTTS. Experimental results validate that OTTS outperforms typical
baselines, i.e., MAML, MatchingNet, and ProtoNet, by a large margin (an average
accuracy improvement of 1.72%).
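The abstract describes OTTS only at a high level. As a rough illustration of the core mechanism (not the authors' code), the sketch below represents each task by a set of feature vectors, computes an entropic optimal transport (Sinkhorn) distance between the target task and each candidate task, and keeps the closest candidates. The function names, feature dimensions, and regularisation value are assumptions made for the example.

```python
import numpy as np

def sinkhorn_distance(X, Y, reg=0.1, n_iters=200):
    """Entropic optimal transport distance between two feature sets.

    X: (n, d) features of one task, Y: (m, d) features of another task.
    Uniform marginals are assumed; `reg` is the entropic regulariser.
    """
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # pairwise squared costs
    Cn = C / C.max()                                     # rescale for numerical stability
    K = np.exp(-Cn / reg)                                # Gibbs kernel
    n, m = X.shape[0], Y.shape[0]
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                             # Sinkhorn-Knopp iterations
        u = a / (K @ v + 1e-12)
        v = b / (K.T @ u + 1e-12)
    P = u[:, None] * K * v[None, :]                      # approximate transport plan
    return float((P * C).sum())                          # transport cost on original scale

def select_similar_tasks(target_feats, candidate_feats, k=5):
    """Rank candidate tasks by OT distance to the target task and keep the k closest."""
    dists = [sinkhorn_distance(target_feats, F) for F in candidate_feats]
    order = np.argsort(dists)[:k]
    return order.tolist(), [dists[i] for i in order]

# Toy usage with random "embedded" tasks: candidates with a small mean shift
# should be ranked closest to the target.
rng = np.random.default_rng(0)
target = rng.normal(size=(50, 16))
candidates = [rng.normal(loc=mu, size=(50, 16)) for mu in (0.0, 0.5, 2.0, 5.0)]
print(select_similar_tasks(target, candidates, k=2))
```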
Related papers
- Data-Efficient and Robust Task Selection for Meta-Learning [1.4557421099695473]
We propose the Data-Efficient and Robust Task Selection (DERTS) algorithm, which can be incorporated into both gradient and metric-based meta-learning algorithms.
DERTS selects weighted subsets of tasks from the task pool by minimizing the approximation error of the full pool gradient in the meta-training stage.
Unlike existing algorithms, DERTS does not require any architecture modification for training and can handle noisy label data in both the support and query sets.
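As a rough illustration of the gradient-matching idea summarized above (this is not the DERTS algorithm itself), the sketch below greedily selects a weighted subset of tasks whose combined gradient approximates the mean gradient of the full task pool; the greedy least-squares procedure and all names are assumptions.

```python
import numpy as np

def greedy_gradient_matching(task_grads, budget):
    """Greedily pick `budget` tasks whose weighted gradient sum approximates
    the mean gradient of the whole task pool.

    task_grads: (T, d) array of flattened per-task gradients.
    Returns the selected task indices and their least-squares weights.
    """
    g_full = task_grads.mean(axis=0)                     # full-pool gradient
    selected, weights = [], None
    for _ in range(budget):
        best = (None, np.inf, None)                      # (index, error, weights)
        for j in range(len(task_grads)):
            if j in selected:
                continue
            G = task_grads[selected + [j]].T             # (d, |S| + 1)
            w, *_ = np.linalg.lstsq(G, g_full, rcond=None)
            err = np.linalg.norm(G @ w - g_full)
            if err < best[1]:
                best = (j, err, w)
        selected.append(best[0])
        weights = best[2]
    return selected, weights

# Toy usage: 20 tasks with 64-dimensional gradients, keep a weighted subset of 5.
rng = np.random.default_rng(1)
grads = rng.normal(size=(20, 64))
idx, w = greedy_gradient_matching(grads, budget=5)
print(idx, np.round(w, 3))
```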
arXiv Detail & Related papers (2024-05-11T19:47:27Z)
- Episodic-free Task Selection for Few-shot Learning [2.508902852545462]
We propose a novel meta-training framework beyond episodic training.
Episodic tasks are not used directly for training, but for evaluating the effectiveness of selected episodic-free tasks.
In experiments, the training task set contains some promising task types, e.g., contrastive learning and classification.
arXiv Detail & Related papers (2024-01-31T10:52:15Z)
- Task Selection and Assignment for Multi-modal Multi-task Dialogue Act Classification with Non-stationary Multi-armed Bandits [11.682678945754837]
Multi-task learning (MTL) aims to improve the performance of a primary task by jointly learning with related auxiliary tasks.
Previous studies suggest that selecting auxiliary tasks at random may not be helpful, and can even be harmful to performance.
This paper proposes a method for selecting and assigning tasks based on non-stationary multi-armed bandits.
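As a minimal sketch of the general idea (not the paper's exact formulation), a discounted-UCB bandit can stand in for the non-stationary policy: each arm is an auxiliary task, and the reward is an assumed signal such as the improvement the task brings to the primary task.

```python
import numpy as np

class DiscountedUCB:
    """Non-stationary multi-armed bandit: past rewards are exponentially
    discounted so the policy can track tasks whose usefulness drifts."""

    def __init__(self, n_arms, gamma=0.95, c=2.0):
        self.gamma, self.c = gamma, c
        self.reward_sum = np.zeros(n_arms)   # discounted cumulative reward
        self.counts = np.zeros(n_arms)       # discounted pull counts

    def select(self):
        if np.any(self.counts == 0):         # play every arm once first
            return int(np.argmin(self.counts))
        means = self.reward_sum / self.counts
        bonus = self.c * np.sqrt(np.log(self.counts.sum()) / self.counts)
        return int(np.argmax(means + bonus))

    def update(self, arm, reward):
        self.reward_sum *= self.gamma        # forget the past gradually
        self.counts *= self.gamma
        self.reward_sum[arm] += reward
        self.counts[arm] += 1.0

# Toy usage: reward = assumed drop in primary-task validation loss after
# one training step on the chosen auxiliary task.
bandit, rng = DiscountedUCB(n_arms=4), np.random.default_rng(2)
for step in range(200):
    arm = bandit.select()
    reward = rng.normal(loc=[0.10, 0.30, 0.00, -0.10][arm], scale=0.05)
    bandit.update(arm, reward)
print(np.round(bandit.reward_sum / np.maximum(bandit.counts, 1e-9), 3))
```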
arXiv Detail & Related papers (2023-09-18T14:51:51Z)
- Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
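A toy sketch of the prioritization idea under assumed details: tasks are sampled in proportion to the recent change in their success rate, a simple learning-progress proxy standing in for SITP's actual scoring rule.

```python
import numpy as np

class SuccessPrioritizer:
    """Samples tasks in proportion to recent progress in their success rate,
    so the curriculum focuses on tasks the agent is currently learning fastest."""

    def __init__(self, n_tasks, window=20, eps=0.05):
        self.history = [[] for _ in range(n_tasks)]   # per-task success flags
        self.window = window
        self.eps = eps                                # exploration floor

    def _progress(self, h):
        if len(h) < 2 * self.window:
            return 1.0                                # not enough data: keep exploring
        recent = np.mean(h[-self.window:])
        older = np.mean(h[-2 * self.window:-self.window])
        return abs(recent - older)                    # learning-progress proxy

    def sample(self, rng):
        scores = np.array([self._progress(h) for h in self.history]) + self.eps
        probs = scores / scores.sum()
        return rng.choice(len(self.history), p=probs)

    def record(self, task, success):
        self.history[task].append(float(success))

# Toy usage with a fake agent whose success on task 1 improves over time.
rng = np.random.default_rng(3)
sched = SuccessPrioritizer(n_tasks=3)
for step in range(300):
    t = sched.sample(rng)
    p = [0.2, min(0.05 + step / 300, 0.9), 0.8][t]    # fake success probabilities
    sched.record(t, rng.random() < p)
```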
arXiv Detail & Related papers (2022-12-30T12:32:43Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- Meta-learning with an Adaptive Task Scheduler [93.63502984214918]
Existing meta-learning algorithms randomly sample meta-training tasks with a uniform probability.
Given a limited number of meta-training tasks, noisy or imbalanced tasks are likely to be detrimental.
We propose an adaptive task scheduler (ATS) for the meta-training process.
arXiv Detail & Related papers (2021-10-26T22:16:35Z)
- Weighted Training for Cross-Task Learning [71.94908559469475]
We introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning.
We show that TAWT is easy to implement, is computationally efficient, requires little hyperparameter tuning, and enjoys non-asymptotic learning-theoretic guarantees.
As a byproduct, the proposed representation-based task distance allows one to reason in a theoretically principled way about several critical aspects of cross-task learning.
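A small illustrative sketch of the weighted-training idea under assumptions: a representation-based distance from each source task to the target task is turned into per-task loss weights via a softmax, which is a stand-in for TAWT's actual weighting scheme.

```python
import numpy as np

def task_weights_from_distances(distances, temperature=1.0):
    """Turn source-to-target task distances into training weights:
    closer source tasks get larger weight (softmax of negative distance)."""
    d = np.asarray(distances, dtype=float)
    logits = -d / temperature
    logits -= logits.max()                    # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def weighted_training_loss(task_losses, weights):
    """Cross-task objective: weighted sum of per-source-task losses."""
    return float(np.dot(weights, task_losses))

# Toy usage: three source tasks; task 0 is closest to the target.
dists = [0.2, 1.0, 3.0]        # e.g. distances between task representations
w = task_weights_from_distances(dists, temperature=0.5)
print(np.round(w, 3), weighted_training_loss([0.9, 1.1, 0.7], w))
```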
arXiv Detail & Related papers (2021-05-28T20:27:02Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly.
A task hardness aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that the meta-learners can obtain better results with our expert training strategy.
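A schematic sketch of an easy-to-hard curriculum, assuming hardness scores per task are already available (the hardness-estimation module itself is not reproduced here):

```python
import numpy as np

def easy_to_hard_schedule(task_hardness, n_stages=3):
    """Split tasks into curriculum stages of increasing estimated hardness.

    task_hardness: 1-D array of hardness scores (e.g. the error of a quickly
    adapted model on each task's query set -- an assumed proxy).
    Returns a list of index arrays, easiest stage first.
    """
    order = np.argsort(task_hardness)        # easiest -> hardest
    return np.array_split(order, n_stages)

# Toy usage: 12 tasks with random hardness estimates.
rng = np.random.default_rng(4)
hardness = rng.random(12)
for stage, tasks in enumerate(easy_to_hard_schedule(hardness)):
    print(f"stage {stage}: train on tasks {tasks.tolist()}")
```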
arXiv Detail & Related papers (2020-07-13T08:49:00Z)
- Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell Classification [8.998976678920236]
We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
arXiv Detail & Related papers (2020-07-09T18:03:12Z)