Demystification of Few-shot and One-shot Learning
- URL: http://arxiv.org/abs/2104.12174v1
- Date: Sun, 25 Apr 2021 14:47:05 GMT
- Title: Demystification of Few-shot and One-shot Learning
- Authors: Ivan Y. Tyukin, Alexander N. Gorban, Muhammad H. Alkhudaydi, Qinghua Zhou
- Abstract summary: Few-shot and one-shot learning have been the subject of active and intensive research in recent years.
We show that if the ambient or latent decision space of a learning machine is sufficiently high-dimensional, then a large class of objects in this space can indeed be easily learned from few examples.
- Score: 63.58514532659252
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot and one-shot learning have been the subject of active and intensive
research in recent years, with mounting evidence pointing to successful
implementation and exploitation of few-shot learning algorithms in practice.
Classical statistical learning theories do not fully explain why few- or
one-shot learning is at all possible since traditional generalisation bounds
normally require large training and testing samples to be meaningful. This
sharply contrasts with numerous examples of successful one- and few-shot
learning systems and applications.
In this work we present mathematical foundations for a theory of one-shot and
few-shot learning and reveal conditions specifying when such learning schemes
are likely to succeed. Our theory is based on intrinsic properties of
high-dimensional spaces. We show that if the ambient or latent decision space
of a learning machine is sufficiently high-dimensional, then a large class of
objects in this space can indeed be easily learned from few examples provided
that certain data non-concentration conditions are met.
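The abstract's central claim lends itself to a small numerical illustration. The sketch below is only an illustration of the general geometric effect, not the paper's exact construction: the uniform-ball data model, sample sizes, and margin parameter are assumptions made here. It estimates how often a single new point y can be separated from a large i.i.d. sample by the one simple linear rule x ↦ ⟨x, y⟩ as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n, d):
    """Draw n points uniformly from the unit ball in R^d."""
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = rng.uniform(size=(n, 1)) ** (1.0 / d)
    return g * r

def separability_rate(d, n=10_000, trials=100, eps=0.1):
    """Fraction of trials in which one new point y is separated from all n
    background points by the single linear threshold <x, y> >= (1 - eps)|y|^2."""
    hits = 0
    for _ in range(trials):
        data = sample_ball(n, d)
        y = sample_ball(1, d)[0]
        threshold = (1.0 - eps) * (y @ y)
        if np.all(data @ y < threshold):
            hits += 1
    return hits / trials

for d in (2, 10, 50, 200):
    print(f"d={d:4d}  separable fraction ~ {separability_rate(d):.2f}")
```

In low dimensions the single threshold almost never separates the new point, while for dimensions in the hundreds it succeeds in nearly every trial; this is the kind of behaviour that the non-concentration conditions in the paper formalise.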
Related papers
- Contrastive Learning and Abstract Concepts: The Case of Natural Numbers [0.0]
We show that contrastive learning can be trained to count at a glance with high accuracy at both human and super-human ranges.
We compare this with a supervised learning (SL) neural network of similar architecture trained to count at a glance.
arXiv Detail & Related papers (2024-08-05T05:41:16Z)
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
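As a rough sketch of the selection rule described in the entry above (not the paper's actual algorithms; the embedding model, the uncertainty scores, and the weighting are placeholder assumptions), candidate demonstrations can be ranked by combining similarity to the test input with a preference for low uncertainty:

```python
import numpy as np

def select_demonstrations(cand_emb, cand_uncertainty, test_emb, k=4, alpha=0.5):
    """Rank candidate examples for in-context learning by combining cosine
    similarity to the test example with low predictive uncertainty.
    cand_emb: (n, d) candidate embeddings; cand_uncertainty: (n,) in [0, 1];
    test_emb: (d,) embedding of the test input."""
    cand = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    test = test_emb / np.linalg.norm(test_emb)
    similarity = cand @ test                       # higher is better
    score = alpha * similarity - (1 - alpha) * cand_uncertainty
    return np.argsort(-score)[:k]                  # indices of the top-k demonstrations

# Toy usage with random embeddings and uncertainty scores.
rng = np.random.default_rng(1)
emb = rng.standard_normal((100, 32))
unc = rng.uniform(size=100)
print(select_demonstrations(emb, unc, rng.standard_normal(32)))
```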
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Realizable Learning is All You Need [21.34668631009594]
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory.
We give the first model-independent framework explaining the equivalence of realizable and agnostic learnability.
arXiv Detail & Related papers (2021-11-08T19:00:00Z)
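For context, the two notions being equated are the standard textbook PAC settings (recalled here for reference, not part of the paper's new framework): realizable learning assumes some hypothesis in the class $\mathcal{H}$ has zero error on the data distribution $D$, i.e. $\inf_{h \in \mathcal{H}} \mathrm{err}_D(h) = 0$, whereas an agnostic learner makes no such assumption and must, with probability at least $1-\delta$, output $\hat h$ satisfying

$$\mathrm{err}_D(\hat h) \;\le\; \inf_{h \in \mathcal{H}} \mathrm{err}_D(h) + \epsilon.$$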
- Exploring Adversarial Examples for Efficient Active Learning in Machine Learning Classifiers [17.90617023533039]
We first add particular perturbations to the original training examples using adversarial attack methods.
We then investigate the connections between active learning and these particular training examples.
Results show that the established theoretical foundation will guide better active learning strategies based on adversarial examples.
arXiv Detail & Related papers (2021-09-22T14:51:26Z)
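A minimal sketch of the general idea in this entry (not the authors' method; the linear classifier, the FGSM-style step, and the selection rule are simplifying assumptions): the size of the smallest perturbation needed to flip a prediction serves as a proxy for distance to the decision boundary, and the unlabeled points that flip most easily are queried first.

```python
import numpy as np

rng = np.random.default_rng(2)

def predict(w, b, X):
    """Binary prediction of a linear classifier."""
    return (X @ w + b > 0).astype(int)

def flip_perturbation_size(w, b, x, step=0.01, max_steps=1000):
    """Grow an FGSM-like perturbation along the signed gradient direction until
    the prediction flips; return its L2 norm. Small values suggest the point
    lies close to the decision boundary."""
    y0 = predict(w, b, x[None])[0]
    direction = -np.sign(w) if y0 == 1 else np.sign(w)   # push the score toward the other class
    for i in range(1, max_steps + 1):
        x_adv = x + i * step * direction
        if predict(w, b, x_adv[None])[0] != y0:
            return np.linalg.norm(x_adv - x)
    return np.inf

# Toy active-learning query: pick the unlabeled points that are easiest to flip.
d, n_pool, budget = 10, 200, 5
w, b = rng.standard_normal(d), 0.0            # stand-in for a trained classifier
pool = rng.standard_normal((n_pool, d))
scores = np.array([flip_perturbation_size(w, b, x) for x in pool])
print("query these pool indices:", np.argsort(scores)[:budget])
```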
- Self-training with Few-shot Rationalization: Teacher Explanations Aid Student in Few-shot NLU [88.8401599172922]
We develop a framework based on self-training language models with limited task-specific labels and rationales.
We show that the neural model performance can be significantly improved by making it aware of its rationalized predictions.
arXiv Detail & Related papers (2021-09-17T00:36:46Z)
- LibFewShot: A Comprehensive Library for Few-shot Learning [78.58842209282724]
Few-shot learning, especially few-shot image classification, has received increasing attention and witnessed significant advances in recent years.
Some recent studies implicitly show that many generic techniques or tricks, such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method.
We propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing seventeen state-of-the-art few-shot learning methods in a unified framework with the same single codebase in PyTorch.
arXiv Detail & Related papers (2021-09-10T14:12:37Z)
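As a generic illustration of what such libraries standardize (a common nearest-prototype baseline and episodic evaluation, not LibFewShot's API or any of its seventeen re-implemented methods), an N-way K-shot episode can be scored with nothing more than class-mean prototypes in feature space:

```python
import numpy as np

rng = np.random.default_rng(3)

def prototype_episode_accuracy(support_x, support_y, query_x, query_y):
    """Nearest-prototype classification for one N-way K-shot episode.
    support_x: (N*K, d) features with labels support_y in {0..N-1};
    query_x / query_y: query features and labels from the same N classes."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=2)
    pred = classes[np.argmin(dists, axis=1)]
    return (pred == query_y).mean()

# Toy 5-way 1-shot episode on synthetic Gaussian-cluster "features".
n_way, k_shot, n_query, d = 5, 1, 15, 64
centers = rng.standard_normal((n_way, d)) * 3.0

def draw(per_class):
    x = np.concatenate([centers[c] + rng.standard_normal((per_class, d))
                        for c in range(n_way)])
    y = np.repeat(np.arange(n_way), per_class)
    return x, y

sx, sy = draw(k_shot)
qx, qy = draw(n_query)
print(f"episode accuracy: {prototype_episode_accuracy(sx, sy, qx, qy):.2f}")
```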
- A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all the samples belonging to the same instance class lie in the same low-dimensional subspace.
Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z)
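The paper above encodes its hypothesis as a prior inside a probabilistic graphical model; as a rough, generic illustration of what "low rank promoting" means (a nuclear-norm surrogate substituted here for the authors' formulation), one can score how close the features of augmented views of one instance are to a common low-dimensional subspace:

```python
import numpy as np

def nuclear_norm_penalty(features):
    """Sum of singular values of the centered feature matrix for the augmented
    views of one instance; smaller values mean the views lie closer to a
    common low-dimensional subspace."""
    centered = features - features.mean(axis=0, keepdims=True)
    return np.linalg.svd(centered, compute_uv=False).sum()

rng = np.random.default_rng(4)
d, n_views = 128, 8
basis = rng.standard_normal((2, d))                    # a 2-dimensional subspace
low_rank = rng.standard_normal((n_views, 2)) @ basis   # views confined to it
unstructured = rng.standard_normal((n_views, d))       # views with no shared subspace
for name, f in (("near low-rank", low_rank), ("unstructured", unstructured)):
    f = f / np.linalg.norm(f, axis=1, keepdims=True)   # unit-normalize, as in contrastive setups
    print(f"{name:14s} nuclear norm ~ {nuclear_norm_penalty(f):.2f}")
```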
- What Can Knowledge Bring to Machine Learning? -- A Survey of Low-shot Learning for Structured Data [11.531353877970547]
Low-shot learning allows the model to obtain good predictive power with very little or no training data.
Structured knowledge plays a key role as a high-level semantic representation of human knowledge.
arXiv Detail & Related papers (2021-06-11T14:07:07Z)