Few-shot Classification via Ensemble Learning with Multi-Order
Statistics
- URL: http://arxiv.org/abs/2305.00454v1
- Date: Sun, 30 Apr 2023 11:41:01 GMT
- Title: Few-shot Classification via Ensemble Learning with Multi-Order
Statistics
- Authors: Sai Yang, Fan Liu, Delong Chen, Jun Zhou
- Abstract summary: We show that leveraging ensemble learning on the base classes can correspondingly reduce the true error on the novel classes.
This paper proposes a novel method named Ensemble Learning with Multi-Order Statistics (ELMOS).
Our method achieves state-of-the-art performance on multiple few-shot classification benchmark datasets.
- Score: 9.145742362513932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer learning has been widely adopted for few-shot classification. Recent
studies reveal that obtaining image representations that generalize well to novel
classes is the key to improving few-shot classification accuracy. To address this
need, we prove theoretically that leveraging ensemble learning on the base classes
can correspondingly reduce the true error on the novel classes. Following this
principle, this paper proposes a novel method named Ensemble Learning with
Multi-Order Statistics (ELMOS). In this method, we attach multiple branches after
the backbone network to serve as the individual learners of the ensemble, which
keeps the storage cost low. We then equip each branch with a statistics pooling of
a different order to increase the diversity of the individual learners. The
learners are optimized with supervised losses during the pre-training phase. After
pre-training, the features from the different branches are concatenated for
classifier evaluation. Extensive experiments demonstrate that the branches
complement each other and that our method achieves state-of-the-art performance on
multiple few-shot classification benchmark datasets.
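To make the branch design concrete, below is a minimal sketch of a shared backbone feeding first- and second-order statistics pooling branches, with per-branch supervised heads for pre-training and a concatenated feature for evaluation. It assumes a PyTorch backbone that returns spatial feature maps; the pooling choices, class names, and head layout are illustrative assumptions, not the authors' exact ELMOS implementation.

```python
import torch
import torch.nn as nn


def first_order_pool(feat):
    """Global average pooling: (B, C, H, W) -> (B, C)."""
    return feat.mean(dim=(2, 3))


def second_order_pool(feat):
    """Covariance (second-order) pooling: (B, C, H, W) -> (B, C*C)."""
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)
    x = x - x.mean(dim=2, keepdim=True)
    cov = torch.bmm(x, x.transpose(1, 2)) / (h * w - 1)
    return cov.reshape(b, c * c)


class MultiOrderEnsemble(nn.Module):
    """Shared backbone with one branch per statistics order (illustrative).

    Each branch gets its own supervised head on the base classes during
    pre-training; at evaluation time the pooled features of all branches
    are concatenated for the few-shot classifier.
    """

    def __init__(self, backbone, feat_dim, num_base_classes):
        super().__init__()
        self.backbone = backbone  # any conv backbone returning (B, C, H, W)
        self.poolers = [first_order_pool, second_order_pool]
        dims = [feat_dim, feat_dim * feat_dim]
        self.heads = nn.ModuleList(nn.Linear(d, num_base_classes) for d in dims)

    def forward(self, x):
        feat = self.backbone(x)
        branch_feats = [pool(feat) for pool in self.poolers]
        logits = [head(f) for head, f in zip(self.heads, branch_feats)]
        return logits, torch.cat(branch_feats, dim=1)


# Example usage with a tiny stand-in backbone (illustrative only).
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model = MultiOrderEnsemble(backbone, feat_dim=16, num_base_classes=64)
logits, fused = model(torch.randn(2, 3, 32, 32))
print([l.shape for l in logits], fused.shape)  # two (2, 64) logit tensors, fused feature (2, 272)
```

Sharing one backbone across branches is what keeps the storage cost close to that of a single model while the differing pooling orders supply the diversity an ensemble needs.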
Related papers
- Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method [53.170053108447455]
Ensemble learning is a method that leverages weak learners to produce a strong learner.
We design a smooth and convex objective function that leverages the concept of margin, making the strong learner more discriminative.
We then compare our algorithm with random forests of ten times the size and other classical methods across numerous datasets.
arXiv Detail & Related papers (2024-08-06T03:42:38Z) - Low-Cost Self-Ensembles Based on Multi-Branch Transformation and Grouped Convolution [20.103367702014474]
We propose a new low-cost ensemble learning method that achieves high efficiency and strong classification performance.
For training, we employ knowledge distillation using the ensemble of the outputs as the teacher signal.
Experimental results show that our method achieves state-of-the-art classification accuracy and higher uncertainty estimation performance.
arXiv Detail & Related papers (2024-08-05T08:36:13Z) - Generalization Bounds for Few-Shot Transfer Learning with Pretrained
Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z) - A Cross-Conformal Predictor for Multi-label Classification [0.0]
In multi-label learning, each instance is associated with multiple classes simultaneously.
This work examines the application of a recently developed framework called Conformal Prediction to the multi-label learning setting.
arXiv Detail & Related papers (2022-11-29T14:21:49Z) - Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes).
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z) - Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification, which assigns a single sample to more than one class at the same time, has attracted much attention in the machine learning community.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z) - Multi-Class Classification from Single-Class Data with Confidences [90.48669386745361]
We propose an empirical risk minimization framework that is loss-/model-/optimizer-independent.
We show that our method can be Bayes-consistent with a simple modification even if the provided confidences are highly noisy.
arXiv Detail & Related papers (2021-06-16T15:38:13Z) - Shot in the Dark: Few-Shot Learning with No Base-Class Labels [32.96824710484196]
We show that off-the-shelf self-supervised learning outperforms transductive few-shot methods by 3.9% for 5-shot accuracy on miniImageNet.
This motivates us to examine more carefully the role of features learned through self-supervision in few-shot learning.
arXiv Detail & Related papers (2020-10-06T02:05:27Z) - Learning by Minimizing the Sum of Ranked Range [58.24935359348289]
We introduce the sum of ranked range (SoRR) as a general approach to forming learning objectives.
A ranked range is a consecutive sequence of sorted values of a set of real numbers.
We explore two machine-learning applications of minimizing SoRR: the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification (a toy SoRR computation is sketched after this list).
arXiv Detail & Related papers (2020-10-05T01:58:32Z) - Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z) - Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z)
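As promised in the sum-of-ranked-range entry above, here is a toy illustration of computing SoRR over a set of individual losses. The function name, the 1-indexed [k, m] rank convention, and the final averaging step are assumptions made for this sketch; see the paper for the precise definitions of SoRR and the AoRR/TKML losses.

```python
import numpy as np


def sorr(values, k, m):
    """Sum of the k-th through m-th largest values (1-indexed ranks, k <= m)."""
    ranked = np.sort(np.asarray(values, dtype=float))[::-1]  # sort descending
    return ranked[k - 1:m].sum()


# Individual sample losses; dropping the single largest value (a possible outlier)
# and summing ranks 2..4 gives a robust aggregate in the spirit of AoRR.
losses = [0.1, 2.3, 0.7, 1.5, 0.05]
print(sorr(losses, k=2, m=4))       # 1.5 + 0.7 + 0.1 = 2.3
print(sorr(losses, k=2, m=4) / 3)   # averaged ranked range (AoRR-style)
```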
This list is automatically generated from the titles and abstracts of the papers on this site.