Fast ABC-Boost: A Unified Framework for Selecting the Base Class in
Multi-Class Classification
- URL: http://arxiv.org/abs/2205.10927v1
- Date: Sun, 22 May 2022 20:42:26 GMT
- Title: Fast ABC-Boost: A Unified Framework for Selecting the Base Class in
Multi-Class Classification
- Authors: Ping Li and Weijie Zhao
- Abstract summary: We develop a unified framework for effectively selecting the base class by introducing a series of ideas to improve the computational efficiency of ABC-Boost.
Our framework has parameters $(s,g,w)$.
- Score: 21.607059258448594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The work in ICML'09 showed that the derivatives of the classical multi-class
logistic regression loss function could be re-written in terms of a pre-chosen
"base class" and applied the new derivatives in the popular boosting framework.
In order to make use of the new derivatives, one must have a strategy to
identify/choose the base class at each boosting iteration. The "adaptive base
class boost" (ABC-Boost) method in ICML'09 adopted a computationally
expensive "exhaustive search" strategy for the base class at each iteration. It
has been well demonstrated that ABC-Boost, when integrated with trees, can
achieve substantial improvements in many multi-class classification tasks.
Furthermore, the work in UAI'10 derived the explicit second-order tree split
gain formula which typically improved the classification accuracy considerably,
compared with using only the first-order information for tree splitting, for
both multi-class and binary-class classification tasks. In this paper, we
develop a unified framework for effectively selecting the base class by
introducing a series of ideas to improve the computational efficiency of
ABC-Boost. Our framework has parameters $(s,g,w)$. At each boosting iteration,
we only search for the "$s$-worst classes" (instead of all classes) to
determine the base class. We also allow a "gap" $g$ when conducting the search.
That is, we only search for the base class once every $g+1$ iterations. We
furthermore allow a "warm up" stage by only starting the search after $w$
boosting iterations. The parameters $s$, $g$, $w$, can be viewed as tunable
parameters and certain combinations of $(s,g,w)$ may even lead to better test
accuracy than the "exhaustive search" strategy. Overall, our proposed framework
provides a robust and reliable scheme for implementing ABC-Boost in practice.
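To make the schedule concrete, the following is a minimal sketch of how the $(s,g,w)$ parameters could control the base-class search at each boosting iteration, as described in the abstract. The function signature and the helper inputs (`class_losses`, `trial_loss`) are hypothetical illustrations, not the interface of the released ABC-Boost package.

```python
def select_base_class(t, s, g, w, class_losses, trial_loss, last_base):
    """Choose the base class at boosting iteration t under the (s, g, w) schedule.

    t            : current boosting iteration (0-indexed)
    s            : number of "worst" classes to search over
    g            : gap; the search runs only once every g + 1 iterations
    w            : warm-up; no search during the first w iterations
    class_losses : class_losses[k] is the current training loss attributed to class k
    trial_loss   : trial_loss(k) returns the training loss after a trial boosting
                   step that uses class k as the base class
    last_base    : base class chosen at the previous search
    """
    # Warm-up stage: keep the previous (or default) base class.
    if t < w:
        return last_base

    # Gap: only search once every g + 1 iterations; otherwise reuse last_base.
    if (t - w) % (g + 1) != 0:
        return last_base

    # Restrict the search to the s "worst" classes, i.e. the classes with the
    # largest current training loss, instead of all classes.
    num_classes = len(class_losses)
    candidates = sorted(range(num_classes),
                        key=lambda k: class_losses[k],
                        reverse=True)[:s]

    # Exhaustive search, but only over the s candidates: the base class is the
    # one whose trial boosting step yields the smallest training loss.
    return min(candidates, key=trial_loss)
```

Setting $s$ to the number of classes with $g=0$ and $w=0$ recovers the exhaustive-search behavior of the original ABC-Boost, while smaller $s$ and larger $g$, $w$ trade search accuracy for speed.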
Related papers
- Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method [53.170053108447455]
Ensemble learning is a method that leverages weak learners to produce a strong learner.
We design a smooth and convex objective function that leverages the concept of margin, making the strong learner more discriminative.
We then compare our algorithm with random forests of ten times the size and other classical methods across numerous datasets.
arXiv Detail & Related papers (2024-08-06T03:42:38Z) - A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
arXiv Detail & Related papers (2024-02-06T15:45:27Z) - Few-Shot Class-Incremental Learning via Training-Free Prototype
Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z) - Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm, that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z) - RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the
BERT Model Boosted by an Improved ABC Algorithm [6.82469220191368]
Answer selection (AS) is a critical subtask of the open-domain question answering (QA) problem.
The present paper proposes a method called RLAS-BIABC for AS, which is established on attention mechanism-based long short-term memory (LSTM) and the bidirectional encoder representations from transformers (BERT) word embedding.
arXiv Detail & Related papers (2023-01-07T08:33:05Z) - Package for Fast ABC-Boost [21.607059258448594]
This report presents the open-source package which implements the series of our boosting works in the past years.
The histogram-based (feature-binning) approach makes the tree implementation convenient and efficient.
The explicit gain formula in Li (2010) for tree splitting, based on second-order derivatives of the loss function, typically improves classification accuracy; a generic sketch of this type of gain appears after this list.
arXiv Detail & Related papers (2022-07-18T17:22:32Z) - APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic
Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z) - Multiple Classifiers Based Maximum Classifier Discrepancy for
Unsupervised Domain Adaptation [25.114533037440896]
We propose to extend the structure of two classifiers to multiple classifiers to further boost its performance.
We demonstrate that, on average, adopting the structure of three classifiers normally yields the best performance as a trade-off between accuracy and efficiency.
arXiv Detail & Related papers (2021-08-02T03:00:13Z) - Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z) - BoostTree and BoostForest for Ensemble Learning [27.911350375268576]
BoostForest is an ensemble learning approach using BoostTree as base learners and can be used for both classification and regression.
It generally outperformed four classical ensemble learning approaches (Random Forest, Extra-Trees, XGBoost and LightGBM) on 35 classification and regression datasets.
arXiv Detail & Related papers (2020-03-21T19:52:13Z)
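For reference, the second-order tree-split gain mentioned in the main abstract and in the Fast ABC-Boost package entry is, in its generic form, a ratio of squared first-derivative sums to second-derivative sums in each child node. The sketch below assumes that generic form; the exact expression in Li (2010) should be taken from the paper itself, and `eps` is only a hypothetical numerical guard.

```python
def second_order_split_gain(g_left, h_left, g_right, h_right, eps=1e-12):
    """Generic second-order split gain for a candidate tree split.

    g_left, g_right : first derivatives of the loss for examples in each child
    h_left, h_right : second derivatives of the loss for examples in each child
    """
    gl, hl = sum(g_left), sum(h_left)
    gr, hr = sum(g_right), sum(h_right)
    # Gain = children's scores minus the parent's score; a larger gain means a
    # better split under the second-order approximation of the loss.
    parent = (gl + gr) ** 2 / (hl + hr + eps)
    return gl ** 2 / (hl + eps) + gr ** 2 / (hr + eps) - parent
```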