Few-Shot One-Class Classification via Meta-Learning
- URL: http://arxiv.org/abs/2007.04146v2
- Date: Thu, 11 Feb 2021 18:52:13 GMT
- Title: Few-Shot One-Class Classification via Meta-Learning
- Authors: Ahmed Frikha, Denis Krompaß, Hans-Georg Köpken and Volker Tresp
- Abstract summary: We study the intersection of few-shot learning and one-class classification (OCC) and modify the episodic data sampling of MAML so that a few gradient steps on one-class minibatches yield a performance increase on class-balanced test data.
- Score: 22.548520862073023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although few-shot learning and one-class classification (OCC), i.e., learning
a binary classifier with data from only one class, have been separately well
studied, their intersection remains rather unexplored. Our work addresses the
few-shot OCC problem and presents a method to modify the episodic data sampling
strategy of the model-agnostic meta-learning (MAML) algorithm to learn a model
initialization particularly suited for learning few-shot OCC tasks. This is
done by explicitly optimizing for an initialization which only requires few
gradient steps with one-class minibatches to yield a performance increase on
class-balanced test data. We provide a theoretical analysis that explains why
our approach works in the few-shot OCC scenario, while other meta-learning
algorithms fail, including the unmodified MAML. Our experiments on eight
datasets from the image and time-series domains show that our method leads to
better results than classical OCC and few-shot classification approaches, and
demonstrate the ability to learn unseen tasks from only few normal class
samples. Moreover, we successfully train anomaly detectors for a real-world
application on sensor readings recorded during industrial manufacturing of
workpieces with a CNC milling machine, by using few normal examples. Finally,
we empirically demonstrate that the proposed data sampling technique increases
the performance of more recent meta-learning algorithms in few-shot OCC and
yields state-of-the-art results in this problem setting.
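The core modification described in the abstract is an episode-sampling scheme: the inner-loop (adaptation) minibatch contains only normal-class samples, while the outer-loop (meta-objective) batch is class-balanced. A minimal sketch of such an episode sampler is below; the split sizes, labels, and function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sample_one_class_episode(X, y, normal_label=0, k_shot=5, q_query=10, rng=None):
    """Sample one meta-training episode with one-class adaptation data.

    Support set: k_shot normal-only examples (used for the inner-loop
    gradient steps). Query set: class-balanced (used for the outer-loop
    loss), so the learned initialization is rewarded for generalizing
    to balanced test data after one-class adaptation. The sizes and
    label convention here are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    normal_idx = np.flatnonzero(y == normal_label)
    anomaly_idx = np.flatnonzero(y != normal_label)

    # Support set: one-class (normal-only) minibatch.
    support = rng.choice(normal_idx, size=k_shot, replace=False)

    # Query set: half normal, half anomalous, disjoint from the support.
    remaining_normal = np.setdiff1d(normal_idx, support)
    q_norm = rng.choice(remaining_normal, size=q_query // 2, replace=False)
    q_anom = rng.choice(anomaly_idx, size=q_query // 2, replace=False)
    query = np.concatenate([q_norm, q_anom])

    return (X[support], y[support]), (X[query], y[query])
```

In a MAML-style training loop, the support tuple would feed the inner-loop update and the query tuple the meta-gradient; only the sampling differs from standard MAML.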
Related papers
- Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language
Transfer Learning [59.38343286807997]
We propose Model-Agnostic Multitask Fine-tuning (MAMF) for vision-language models on unseen tasks.
Compared with model-agnostic meta-learning (MAML), MAMF discards the bi-level optimization and uses only first-order gradients.
We show that MAMF consistently outperforms the classical fine-tuning method for few-shot transfer learning on five benchmark datasets.
arXiv Detail & Related papers (2022-03-09T17:26:53Z) - Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain,
Active and Continual Few-Shot Learning [41.07029317930986]
We propose a variance-sensitive class of models that operates in a low-label regime.
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance based classifier.
We further extend this approach to a transductive learning setting, proposing Transductive CNAPS.
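The Simple CNAPS classifier mentioned above scores query points by Mahalanobis distance to class prototypes under a regularized covariance estimate. A minimal sketch follows; the paper's hierarchical regularization (blending class-level, task-level, and identity covariances) is simplified here to shrinkage toward the identity, so the `shrink` parameter is an illustrative assumption.

```python
import numpy as np

def mahalanobis_classify(support, labels, queries, shrink=0.5):
    """Sketch of a Mahalanobis-distance few-shot classifier in the spirit
    of Simple CNAPS: each class gets a prototype (mean) and a shrinkage-
    regularized covariance; queries are assigned to the nearest class."""
    classes = np.unique(labels)
    d = support.shape[1]
    dists = np.empty((len(queries), len(classes)))
    for j, c in enumerate(classes):
        feats = support[labels == c]
        mu = feats.mean(axis=0)
        emp = np.cov(feats, rowvar=False) if len(feats) > 1 else np.zeros((d, d))
        # Shrink toward the identity so the covariance is invertible
        # even with very few shots (a simplification of the paper's scheme).
        cov = shrink * emp + (1 - shrink) * np.eye(d)
        diff = queries - mu
        # Squared Mahalanobis distance of each query to the class prototype.
        dists[:, j] = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return classes[np.argmin(dists, axis=1)]
```

The transductive variant additionally refines the class statistics using the unlabeled query set, which this sketch omits.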
arXiv Detail & Related papers (2022-01-13T18:59:02Z) - Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and
Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the "episode" idea, sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z) - Meta-learning One-class Classifiers with Eigenvalue Solvers for
Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z) - Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z) - A Primal-Dual Subgradient Approach for Fair Meta Learning [23.65344558042896]
Few-shot meta-learning is well known for its fast adaptation and strong generalization to unseen tasks.
We propose a Primal-Dual Fair Meta-learning framework, namely PDFM, which learns to train fair machine learning models using only a few examples.
arXiv Detail & Related papers (2020-09-26T19:47:38Z) - Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method via optimizing and fast adapting the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-08-06T05:52:59Z) - Self-Supervised Prototypical Transfer Learning for Few-Shot
Classification [11.96734018295146]
The self-supervised transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks.
In few-shot experiments with domain shift, our approach even has comparable performance to supervised methods, but requires orders of magnitude fewer labels.
arXiv Detail & Related papers (2020-06-19T19:00:11Z) - One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.