Meta-Learning for One-Class Classification with Few Examples using
Order-Equivariant Network
- URL: http://arxiv.org/abs/2007.04459v3
- Date: Fri, 21 May 2021 20:05:24 GMT
- Title: Meta-Learning for One-Class Classification with Few Examples using
Order-Equivariant Network
- Authors: Ademola Oladosu, Tony Xu, Philip Ekfeldt, Brian A. Kelly, Miles
Cranmer, Shirley Ho, Adrian M. Price-Whelan, Gabriella Contardo
- Abstract summary: This paper presents a framework for few-shot One-Class Classification (OCC) at test-time.
We consider that we have a set of 'one-class classification' objective-tasks with only a small set of positive examples available for each task.
We propose an approach using order-equivariant networks to learn a 'meta' binary-classifier.
- Score: 1.08890978642722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a meta-learning framework for few-shot One-Class
Classification (OCC) at test-time, a setting where labeled examples are only
available for the positive class, and no supervision is given for the negative
class. We consider a set of 'one-class classification' objective-tasks with
only a small set of positive examples available for each task, and a set of
training tasks with full supervision (i.e. highly imbalanced classification).
We propose an approach using order-equivariant networks to learn a 'meta'
binary-classifier. The model takes as input an example to classify from a
given task, as well as the corresponding supervised set of positive examples
for this OCC task. The output of the model is thus 'conditioned' on the
available positive examples of the given task, allowing it to predict on new
tasks and new examples without labeled negative examples. In this paper, we
are motivated by an astronomy application: our goal is to identify whether
stars belong to a specific stellar group (the 'one-class' for a given task),
called a 'stellar stream', where each stellar stream is a different OCC-task.
We show that our method transfers well to unseen (test) synthetic streams and
outperforms the baselines even though it is not retrained and accesses a much
smaller part of the data per task at prediction time (only positive
supervision). It does not, however, transfer as well to the real stream GD-1.
This could stem from intrinsic differences between the synthetic and real
streams, highlighting the need for consistency in the 'nature' of the tasks
for this method. Nevertheless, light fine-tuning improves performance and
outperforms our baselines. Our experiments show encouraging results that
motivate further exploration of meta-learning methods for OCC tasks.
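The architecture the abstract describes, a binary classifier whose output is conditioned on the task's set of positive examples via an order-equivariant network, can be illustrated with a short PyTorch sketch. The version below uses a Deep-Sets-style permutation-invariant pooling to summarize the positive set; the class name, layer sizes, and exact conditioning scheme are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a set-conditioned one-class classifier in the spirit
# of the abstract. Assumptions: mean-pooling over the positive set (the
# paper's full model uses order-equivariant layers), and all names and
# dimensions below are hypothetical.
import torch
import torch.nn as nn

class SetConditionedOCC(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        # phi embeds each positive example independently; pooling over the
        # set axis makes the summary invariant to example ordering.
        self.phi = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
        # rho maps the pooled summary to a task-conditioning vector.
        self.rho = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU())
        # The head scores a query given [query features, set summary].
        self.head = nn.Sequential(nn.Linear(in_dim + hid_dim, hid_dim),
                                  nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, query: torch.Tensor, positives: torch.Tensor):
        # query: (batch, in_dim); positives: (batch, n_pos, in_dim)
        summary = self.rho(self.phi(positives).mean(dim=1))
        logit = self.head(torch.cat([query, summary], dim=-1))
        return logit.squeeze(-1)  # higher = more likely in-class

# Toy usage: 6-D star features, a support set of 20 positives per task.
model = SetConditionedOCC(in_dim=6)
query = torch.randn(8, 6)           # 8 candidate stars to classify
positives = torch.randn(8, 20, 6)   # each task's positive support set
logits = model(query, positives)    # shape (8,)
```

At meta-training time such a model would be optimized episodically on the fully supervised (highly imbalanced) training tasks, where negative labels are available; at test time only the new task's small positive set is needed to condition the prediction.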
Related papers
- Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need [18.832471712088353]
We propose, for the first time, an instance-level weakly supervised contrastive learning algorithm under the MIL setting.
We also propose an accurate pseudo label generation method through prototype learning.
arXiv Detail & Related papers (2023-07-05T12:44:52Z)
- A Multi-Head Model for Continual Learning via Out-of-Distribution Replay [16.189891444511755]
Many approaches have been proposed to deal with catastrophic forgetting (CF) in continual learning (CL).
This paper proposes an entirely different approach that builds a separate classifier (head) for each task (called a multi-head model) using a transformer network, called MORE.
arXiv Detail & Related papers (2022-08-20T19:17:12Z)
- Task-Adaptive Few-shot Node Classification [49.79924004684395]
We propose a task-adaptive node classification framework under the few-shot learning setting.
Specifically, we first accumulate meta-knowledge across classes with abundant labeled nodes.
Then we transfer such knowledge to the classes with limited labeled nodes via our proposed task-adaptive modules.
arXiv Detail & Related papers (2022-06-23T20:48:27Z)
- Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
Visual Transformer models non-local visual concept dependency between labelled and unlabelled examples.
arXiv Detail & Related papers (2021-06-07T17:13:59Z)
- Cross-Domain Few-Shot Classification via Adversarial Task Augmentation [16.112554109446204]
Few-shot classification aims to recognize unseen classes with few labeled samples from each class.
Many meta-learning models for few-shot classification elaborately design various task-shared inductive bias (meta-knowledge) to solve such tasks.
In this work, we aim to improve the robustness of the inductive bias through task augmentation.
arXiv Detail & Related papers (2021-04-29T14:51:53Z)
- Conditional Meta-Learning of Linear Representations [57.90025697492041]
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.
In this work we overcome this issue by inferring a conditioning function, mapping the tasks' side information into a representation tailored to the task at hand.
We propose a meta-algorithm capable of leveraging this advantage in practice.
arXiv Detail & Related papers (2021-03-30T12:02:14Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Task-Adaptive Clustering for Semi-Supervised Few-Shot Classification [23.913195015484696]
Few-shot learning aims to handle previously unseen tasks using only a small amount of new training data.
In preparing (or meta-training) a few-shot learner, however, a massive amount of labeled data is necessary.
In this work, we propose a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled.
arXiv Detail & Related papers (2020-03-18T13:50:19Z)