A Survey of Learning on Small Data: Generalization, Optimization, and
Challenge
- URL: http://arxiv.org/abs/2207.14443v2
- Date: Tue, 6 Jun 2023 15:44:14 GMT
- Title: A Survey of Learning on Small Data: Generalization, Optimization, and
Challenge
- Authors: Xiaofeng Cao, Weixin Bu, Shengjun Huang, Minling Zhang, Ivor W. Tsang,
Yew Soon Ong, and James T. Kwok
- Abstract summary: Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
- Score: 101.27154181792567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning on big data brings success for artificial intelligence (AI), but the annotation and training costs are expensive. In the future, learning on small data that approximates the generalization ability of big data will be one of the ultimate goals of AI, requiring machines to recognize objectives and scenarios from small data, as humans do. A series of learning topics, such as active learning and few-shot learning, is already moving in this direction. However, there are few theoretical guarantees for their generalization performance. Moreover, most of their settings are passive; that is, the label distribution is explicitly controlled by finite training resources drawn from known distributions. This survey follows the agnostic active sampling theory under a PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in a model-agnostic supervised and unsupervised fashion. Considering that multiple learning communities can produce small data representations, and that related topics have already been well surveyed, we add novel geometric representation perspectives for small data: the Euclidean and non-Euclidean (hyperbolic) mean, for which optimization solutions, including Euclidean gradients, non-Euclidean gradients, and the Stein gradient, are presented and discussed. We then summarize multiple learning communities that may be improved by learning on small data and that yield data-efficient representations, such as transfer learning, contrastive learning, and graph representation learning. Meanwhile, we find that meta-learning may provide effective parameter-update policies for learning on small data. Next, we explore multiple challenging scenarios for small data, such as weak supervision and multi-label learning. Finally, multiple data applications that may benefit from efficient small data representation are surveyed.
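For context on the kind of guarantee analyzed here, recall the textbook agnostic PAC bound for a finite hypothesis class (a standard result, not the survey's sharper active-sampling bounds): given m i.i.d. labeled examples, with probability at least 1 - delta, every hypothesis h in H satisfies

    \mathrm{err}(h) \;\le\; \widehat{\mathrm{err}}(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2m}}

so reaching excess error \varepsilon needs on the order of m = O\!\left((\ln|\mathcal{H}| + \ln(1/\delta))/\varepsilon^2\right) labels; agnostic active sampling aims to lower this label complexity by querying only informative examples.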
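The Euclidean mean of a set of points has a closed form, but the hyperbolic (Frechet) mean on the Poincare ball does not, and is typically computed by Riemannian gradient descent, one of the non-Euclidean optimization routes the abstract mentions. The following is a minimal numpy sketch of that route; the initialization, step size, and retraction are illustrative choices, not the survey's implementation.

    import numpy as np

    def poincare_dist(u, v):
        # Geodesic distance in the Poincare ball model of hyperbolic space.
        sq = np.sum((u - v) ** 2)
        a = 1.0 - np.sum(u ** 2)
        b = 1.0 - np.sum(v ** 2)
        return np.arccosh(1.0 + 2.0 * sq / (a * b))

    def hyperbolic_mean(points, lr=0.1, steps=300):
        # Frechet mean: argmin_mu sum_i d(mu, x_i)^2, solved by Riemannian
        # gradient descent. The Riemannian gradient is the Euclidean gradient
        # rescaled by the inverse metric factor (1 - |mu|^2)^2 / 4.
        mu = 0.5 * np.mean(points, axis=0)           # start inside the ball
        for _ in range(steps):
            a = 1.0 - np.sum(mu ** 2)
            egrad = np.zeros_like(mu)
            for x in points:
                b = 1.0 - np.sum(x ** 2)
                n = np.sum((mu - x) ** 2)
                z = 1.0 + 2.0 * n / (a * b) + 1e-12  # eps guards arccosh at 1
                dz = 4.0 * (mu - x) / (a * b) + 4.0 * n * mu / (a * a * b)
                egrad += 2.0 * np.arccosh(z) / np.sqrt(z * z - 1.0) * dz
            egrad /= len(points)
            mu = mu - lr * (a * a / 4.0) * egrad     # Riemannian step
            norm = np.linalg.norm(mu)
            if norm >= 1.0:                          # retract into the open ball
                mu *= 0.999 / norm
        return mu

    rng = np.random.default_rng(0)
    pts = rng.uniform(-0.4, 0.4, size=(20, 2))
    mu_h = hyperbolic_mean(pts)
    print("Euclidean mean: ", np.mean(pts, axis=0))
    print("Hyperbolic mean:", mu_h)
    print("Frechet cost:   ", sum(poincare_dist(mu_h, x) ** 2 for x in pts))

For points near the origin the two means nearly coincide, since the Poincare metric approaches the Euclidean one there; they diverge as points approach the boundary of the ball.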
Related papers
- Curriculum Learning for Graph Neural Networks: Which Edges Should We
Learn First [13.37867275976255]
We propose a novel strategy that incorporates edges into training according to their difficulty, from easy to hard.
We demonstrate the strength of our proposed method in improving the generalization ability and robustness of learned representations.
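As a concrete illustration of such an easy-to-hard schedule, here is a small Python sketch; the difficulty scores and stage schedule are hypothetical placeholders, since the paper's actual difficulty criterion for edges is more involved.

    import numpy as np

    def curriculum_edge_schedule(edges, difficulty, num_stages=4):
        # Hypothetical easy-to-hard curriculum: sort edges by a difficulty
        # score and release them to the training graph in growing stages.
        order = np.argsort(difficulty)            # easiest edges first
        stages = []
        for s in range(1, num_stages + 1):
            k = int(len(edges) * s / num_stages)  # grow the edge set each stage
            stages.append([edges[i] for i in order[:k]])
        return stages

    # Toy usage: difficulty could come from, e.g., a loss on each edge.
    edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
    difficulty = np.array([0.1, 0.9, 0.3, 0.7, 0.5])
    for stage, es in enumerate(curriculum_edge_schedule(edges, difficulty)):
        print(f"stage {stage}: train on edges {es}")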
arXiv Detail & Related papers (2023-10-28T15:35:34Z)
- Exploring the Boundaries of Semi-Supervised Facial Expression Recognition: Learning from In-Distribution, Out-of-Distribution, and Unconstrained Data [19.442685015494316]
We present a study of 11 of the most recent semi-supervised methods in the context of facial expression recognition (FER).
Our investigation covers semi-supervised learning from in-distribution, out-of-distribution, unconstrained, and very small unlabelled data.
Our results demonstrate that FixMatch consistently achieves better performance on in-distribution unlabelled data, while ReMixMatch stands out among all methods for out-of-distribution, unconstrained, and scarce unlabelled data scenarios.
arXiv Detail & Related papers (2023-06-02T01:40:08Z)
- Zero-shot meta-learning for small-scale data from human subjects [10.320654885121346]
We develop a framework to rapidly adapt to a new prediction task with limited training data for out-of-sample test data.
Our model learns the latent treatment effects of each intervention and, by design, can naturally handle multi-task predictions.
Our model has implications for improved generalization of small-size human studies to the wider population.
arXiv Detail & Related papers (2022-03-29T17:42:04Z)
- Learning from Few Examples: A Summary of Approaches to Few-Shot Learning [3.6930948691311016]
Few-shot learning refers to the problem of learning the underlying pattern in the data from just a few training samples.
Deep learning solutions suffer from data hunger and excessively high demands on computation time and resources.
Few-shot learning, which can drastically reduce the turnaround time of building machine learning applications, emerges as a low-cost solution.
arXiv Detail & Related papers (2022-03-07T23:15:21Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives classification labels from predicted segmentation maps.
We evaluate the effectiveness of our framework on diverse problems showing that CvS is able to achieve much higher classification results compared to previous methods when given only a handful of examples.
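One plausible way to derive a class label from a predicted segmentation map is a majority vote over foreground pixels; the sketch below assumes that reduction, which may differ from the paper's exact aggregation rule.

    import numpy as np

    def label_from_segmentation(seg_logits):
        # Derive a classification label from a predicted segmentation map
        # (C x H x W class scores) by majority vote over non-background pixels.
        seg = seg_logits.argmax(axis=0)   # per-pixel class prediction
        fg = seg[seg != 0]                # drop background class 0
        if fg.size == 0:
            return 0                      # nothing but background
        return int(np.bincount(fg).argmax())  # most frequent foreground class

    # Toy usage: 3 classes (0 = background) on a 4x4 map.
    logits = np.random.default_rng(1).normal(size=(3, 4, 4))
    print(label_from_segmentation(logits))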
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Enhancing ensemble learning and transfer learning in multimodal data analysis by adaptive dimensionality reduction [10.646114896709717]
In multimodal data analysis, not all observations show the same level of reliability or information quality.
We propose an adaptive approach for dimensionality reduction to overcome this issue.
We test our approach on multimodal datasets acquired in diverse research fields.
arXiv Detail & Related papers (2021-05-08T11:53:12Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
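For reference, plain spectral subspace learning without uncertainty modeling reduces to PCA on the sample covariance, as in this short sketch; the uncertainty-aware variant the paper proposes modifies this baseline.

    import numpy as np

    def spectral_subspace(X, k):
        # Baseline spectral subspace learning (PCA): top-k eigenvectors of
        # the sample covariance. Ignores per-point measurement uncertainty,
        # which is the gap the paper above addresses.
        Xc = X - X.mean(axis=0)
        cov = Xc.T @ Xc / (len(X) - 1)
        _, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        return vecs[:, -k:]              # top-k principal directions

    X = np.random.default_rng(2).normal(size=(100, 5))
    W = spectral_subspace(X, 2)
    Z = (X - X.mean(axis=0)) @ W         # 2-D embedding
    print(Z.shape)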
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Learning to Count in the Crowd from Limited Labeled Data [109.2954525909007]
We focus on reducing the annotation effort by learning to count in the crowd from a limited number of labeled samples.
Specifically, we propose a Gaussian Process-based iterative learning mechanism that involves estimation of pseudo-ground truth for the unlabeled data.
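A minimal sketch of the idea, using scikit-learn's GaussianProcessRegressor as a stand-in for the paper's model: fit a GP on the few labeled samples, take its posterior mean as pseudo-ground truth for the unlabeled pool, and keep only low-variance predictions. The features, kernel, and confidence rule here are illustrative assumptions, not the paper's pipeline.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)
    X_lab = rng.uniform(0, 10, size=(15, 2))             # labeled features
    y_lab = X_lab.sum(axis=1) + rng.normal(0, 0.1, 15)   # toy crowd counts
    X_unl = rng.uniform(0, 10, size=(50, 2))             # unlabeled features

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-2)
    gp.fit(X_lab, y_lab)
    pseudo, std = gp.predict(X_unl, return_std=True)
    confident = std < np.median(std)   # keep only low-variance pseudo-labels
    print(pseudo[confident][:5])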
arXiv Detail & Related papers (2020-07-07T04:17:01Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
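The core of prototype-level contrastive learning can be sketched as a softmax cross-entropy over embedding-prototype similarities; the simplified loss below omits PCL's instance-wise term and per-cluster concentration estimates.

    import numpy as np

    def proto_nce_loss(z, assignments, prototypes, tau=0.5):
        # Pull each embedding toward its assigned cluster prototype and away
        # from the other prototypes, via a softmax over similarity logits.
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
        sims = z @ p.T / tau                        # (N, K) similarity logits
        sims -= sims.max(axis=1, keepdims=True)     # numerical stability
        log_prob = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(len(z)), assignments].mean()

    # Toy usage: prototypes would come from k-means over the embeddings.
    rng = np.random.default_rng(4)
    z = rng.normal(size=(8, 16))
    prototypes = rng.normal(size=(3, 16))
    assignments = rng.integers(0, 3, size=8)
    print(proto_nce_loss(z, assignments, prototypes))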
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.