Combat Data Shift in Few-shot Learning with Knowledge Graph
- URL: http://arxiv.org/abs/2101.11354v1
- Date: Wed, 27 Jan 2021 12:35:18 GMT
- Title: Combat Data Shift in Few-shot Learning with Knowledge Graph
- Authors: Yongchun Zhu, Fuzhen Zhuang, Xiangliang Zhang, Zhiyuan Qi, Zhiping Shi, and Qing He
- Abstract summary: In real-world applications, the few-shot learning paradigm often suffers from data shift.
Most existing few-shot learning approaches are not designed with the consideration of data shift.
We propose a novel metric-based meta-learning framework to extract task-specific representations and task-shared representations.
- Score: 42.59886121530736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many few-shot learning approaches have been designed under the meta-learning
framework, which learns from a variety of learning tasks and generalizes to new
tasks. These meta-learning approaches achieve the expected performance in the
scenario where all samples are drawn from the same distribution (i.i.d.
observations). However, in real-world applications, the few-shot learning
paradigm often suffers from data shift, i.e., samples in different tasks, or
even within the same task, can be drawn from different data distributions. Most
existing few-shot learning approaches are not designed with data shift in mind,
and thus show degraded performance when the data distribution shifts. However,
addressing the data shift problem in few-shot learning is non-trivial due to
the limited number of labeled samples in each task. To address this problem, we
propose a novel metric-based meta-learning framework that extracts
task-specific and task-shared representations with the help of a knowledge
graph. The data shift within and between tasks can then be combated by
combining the task-shared and task-specific representations. The proposed model
is evaluated on popular benchmarks and two newly constructed challenging
datasets. The evaluation results demonstrate its remarkable performance.
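The combination of task-shared and task-specific representations lends itself to a short illustration. The sketch below is not the authors' implementation: it modulates a shared encoder FiLM-style with a task embedding (a hypothetical stand-in for the paper's knowledge-graph-derived signal) and classifies queries by distance to class prototypes, the standard metric-based recipe.

```python
# Minimal sketch: task-shared encoder + task-specific FiLM modulation,
# followed by prototype-based (metric) classification.
# `task_emb` is a hypothetical stand-in for a knowledge-graph-derived
# task representation; this is NOT the paper's actual architecture.
import torch
import torch.nn as nn

class TaskConditionedEncoder(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int, task_dim: int):
        super().__init__()
        self.shared = nn.Sequential(  # task-shared representation
            nn.Linear(in_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        # scale/shift conditioned on the task embedding (task-specific part)
        self.gamma = nn.Linear(task_dim, feat_dim)
        self.beta = nn.Linear(task_dim, feat_dim)

    def forward(self, x, task_emb):
        h = self.shared(x)  # features shared across tasks
        return self.gamma(task_emb) * h + self.beta(task_emb)  # task-specific

def prototype_logits(support, support_y, query, n_way):
    # class prototypes = mean embedded support sample per class
    protos = torch.stack([support[support_y == c].mean(0) for c in range(n_way)])
    return -torch.cdist(query, protos)  # negative distance as logits
```

Modulating shared features with a task-conditioned transform is one plausible way to let representations adapt per task (combating within/between-task shift) without overfitting the few labeled samples.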
Related papers
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task and then divide them into easy-to-difficult mini-batches for training.
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
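A minimal sketch of the two-level curriculum described above, assuming hypothetical `interference` (pairwise cross-task interference scores) and `difficulty` (per-instance difficulty) measures, neither of which is specified by this summary:

```python
# Sketch of a Data-CUBE-style two-level curriculum (assumed details).
from itertools import permutations

def order_tasks(tasks, interference):
    """Pick the task order minimizing summed adjacent interference.
    Brute force, so only feasible for a handful of tasks."""
    best = min(
        permutations(tasks),
        key=lambda p: sum(interference[a][b] for a, b in zip(p, p[1:])),
    )
    return list(best)

def easy_to_difficult_batches(instances, difficulty, batch_size):
    """Sort one task's instances by difficulty, then chunk into mini-batches."""
    ordered = sorted(instances, key=difficulty)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]
```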
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even with classification tasks that have few or non-overlapping annotations.
We propose a novel approach in which knowledge exchange between the tasks is enabled via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
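One simple way to realize "knowledge exchange via distribution matching" is to penalize the gap between two tasks' feature statistics so both tasks inhabit a shared representation space. The moment-matching loss below is an illustrative choice, not necessarily the paper's criterion:

```python
# Sketch: align two tasks' feature distributions by matching first and
# second moments. The actual paper may use a different matching criterion.
import torch

def moment_matching_loss(feats_a: torch.Tensor, feats_b: torch.Tensor):
    mean_gap = (feats_a.mean(0) - feats_b.mean(0)).pow(2).sum()
    var_gap = (feats_a.var(0) - feats_b.var(0)).pow(2).sum()
    return mean_gap + var_gap
```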
- On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite remarkable progress, the challenge of optimally learning different tasks simultaneously remains open.
Previous works attempt to modify the gradients from different tasks. However, these methods rely on subjective assumptions about the relationships between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Stochastic Task Allocation (STA), a mechanism that addresses this issue by randomly allocating a subset of tasks to each sample.
For further progress, we propose Interleaved Stochastic Task Allocation (ISTA) to iteratively allocate all tasks to each sample.
arXiv Detail & Related papers (2022-03-06T11:57:18Z)
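The allocation mechanism is easy to sketch: each sample is randomly assigned a subset of tasks, and only those task losses contribute to the objective, sidestepping hand-crafted gradient modification. Details beyond the summary are assumptions:

```python
# Sketch of stochastic task allocation (assumed details).
import random

def allocate_tasks(num_samples, tasks, k):
    """Randomly allocate a size-k subset of tasks to every sample."""
    return [random.sample(tasks, k) for _ in range(num_samples)]

def batch_loss(batch, allocation, task_losses):
    """Average loss over the batch, counting only each sample's allocated tasks."""
    total = 0.0
    for sample, assigned in zip(batch, allocation):
        for t in assigned:
            total += task_losses[t](sample)
    return total / len(batch)
```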
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
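This entry reports an empirical study rather than a new algorithm, but "task diversity" can still be made concrete. A minimal sketch, under the assumption that diversity is varied by restricting the class pool an N-way K-shot episode sampler draws from (one simplified reading of the setup):

```python
# Sketch: low- vs. high-diversity episode sampling (assumed protocol).
import random

def sample_episode(class_pool, n_way, k_shot, examples_by_class):
    classes = random.sample(class_pool, n_way)
    return {c: random.sample(examples_by_class[c], k_shot) for c in classes}

all_classes = list(range(100))
low_diversity_pool = all_classes[:10]   # episodes recycle the same 10 classes
high_diversity_pool = all_classes       # episodes draw from all 100 classes
```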
- Mixture of basis for interpretable continual learning with distribution shifts [1.6114012813668934]
Continual learning in environments with shifting data distributions is a challenging problem with several real-world applications.
We propose a novel approach called Mixture of Basis models (MoB) for addressing this problem setting.
arXiv Detail & Related papers (2022-01-05T22:53:15Z)
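A minimal sketch of a mixture-of-basis-models predictor: the output is a weighted combination of basis model predictions, with input-dependent mixture weights that can adapt as the data distribution shifts. The gating parametrization here is an assumption:

```python
# Sketch: mixture of basis models with a learned, input-dependent gate.
import torch
import torch.nn as nn

class MixtureOfBasis(nn.Module):
    def __init__(self, bases: nn.ModuleList, in_dim: int):
        super().__init__()
        self.bases = bases                           # the basis models
        self.gate = nn.Linear(in_dim, len(bases))    # per-input mixture weights

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)                  # (B, M)
        preds = torch.stack([b(x) for b in self.bases], dim=1)   # (B, M, out)
        return (w.unsqueeze(-1) * preds).sum(dim=1)              # weighted sum
```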
- Instance-Level Task Parameters: A Robust Multi-task Weighting Framework [17.639472693362926]
Recent works have shown that deep neural networks benefit from multi-task learning by learning a shared representation across several related tasks.
We let the training process dictate the optimal weighting of tasks for every instance in the dataset.
We conduct extensive experiments on the SURREAL and CityScapes datasets, for human shape and pose estimation, depth estimation, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-11T02:35:42Z)
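"Letting the training process dictate the optimal weighting of tasks for every instance" can be sketched as a learnable weight per (instance, task) pair that scales each task loss. The uncertainty-style parametrization below is an assumption, not the paper's exact formulation:

```python
# Sketch: per-(instance, task) loss weighting, learned jointly with the network.
import torch
import torch.nn as nn

class InstanceTaskWeights(nn.Module):
    def __init__(self, num_instances: int, num_tasks: int):
        super().__init__()
        # one log-variance per (instance, task) pair
        self.log_var = nn.Parameter(torch.zeros(num_instances, num_tasks))

    def weighted_loss(self, idx, task_losses):
        # idx: (batch,) dataset indices; task_losses: (batch, num_tasks)
        s = self.log_var[idx]
        # exp(-s) down-weights noisy/hard pairs; +s keeps s from diverging
        return (torch.exp(-s) * task_losses + s).mean()
```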
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
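A compact sketch of adaptive task sampling: classes the model currently finds hard are drawn into episodes more often, sharpening the mimicry of difficult test-time few-shot situations. The per-class difficulty signal (e.g., a running error rate) is an assumed stand-in:

```python
# Sketch: sample episode classes with probability proportional to difficulty.
import random

def adaptive_sample_classes(class_difficulty, n_way):
    """Sample n_way distinct classes, weighted by current difficulty."""
    classes = list(class_difficulty)
    picked = []
    while len(picked) < n_way:
        weights = [class_difficulty[c] for c in classes]
        c = random.choices(classes, weights=weights, k=1)[0]
        classes.remove(c)
        picked.append(c)
    return picked
```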
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.