AdaBoost is not an Optimal Weak to Strong Learner
- URL: http://arxiv.org/abs/2301.11571v1
- Date: Fri, 27 Jan 2023 07:37:51 GMT
- Title: AdaBoost is not an Optimal Weak to Strong Learner
- Authors: Mikael Møller Høgsgaard, Kasper Green Larsen, Martin Ritzert
- Abstract summary: We show that the sample complexity of AdaBoost, and other classic variations thereof, is sub-optimal by at least one logarithmic factor in the desired accuracy of the strong learner.
- Score: 11.003568749905359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AdaBoost is a classic boosting algorithm for combining multiple inaccurate
classifiers produced by a weak learner, to produce a strong learner with
arbitrarily high accuracy when given enough training data. Determining the
optimal number of samples necessary to obtain a given accuracy of the strong
learner is a basic learning theoretic question. Larsen and Ritzert
(NeurIPS'22) recently presented the first provably optimal weak-to-strong
learner. However, their algorithm is somewhat complicated and it remains an
intriguing question whether the prototypical boosting algorithm AdaBoost also
makes optimal use of training samples. In this work, we answer this question in
the negative. Concretely, we show that the sample complexity of AdaBoost, and
other classic variations thereof, is sub-optimal by at least one logarithmic
factor in the desired accuracy of the strong learner.
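For intuition about the algorithm under discussion, here is a minimal sketch of discrete AdaBoost (Freund and Schapire's reweighting scheme) using one-dimensional threshold stumps as the weak learner. The stump learner, data layout, and function names are illustrative choices for this sketch, not anything taken from the paper.

```python
import math

def train_stumps(X, y, rounds):
    """Discrete AdaBoost sketch with axis-aligned threshold stumps.

    X: list of feature vectors, y: labels in {-1, +1}.
    Returns a list of (alpha, feature, threshold, polarity) weak hypotheses.
    """
    n = len(X)
    w = [1.0 / n] * n                      # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        best = None                        # (error, feature, threshold, polarity)
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    # stump predicts pol if x[f] >= t, else -pol
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if (pol if x[f] >= t else -pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # clip to avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        # reweight: up-weight misclassified samples, then renormalise
        w = [wi * math.exp(-alpha * yi * (pol if x[f] >= t else -pol))
             for wi, x, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of the weak hypotheses."""
    score = sum(a * (pol if x[f] >= t else -pol)
                for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

The sample-complexity question in the abstract concerns how much training data such a procedure needs before the weighted vote reaches a target accuracy; the result above states that AdaBoost's reweighting scheme pays at least an extra logarithmic factor over the optimal bound.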
Related papers
- How to Boost Any Loss Function [63.573324901948716]
We show that loss functions can be efficiently optimized with boosting.
We show that boosting can achieve a feat not yet known to be possible in the classical $0^{th}$-order setting.
arXiv Detail & Related papers (2024-07-02T14:08:23Z) - When Analytic Calculus Cracks AdaBoost Code [0.30693357740321775]
This study analyzes the two-class AdaBoost procedure implemented in scikit-learn.
AdaBoost is an algorithm in name only, as the resulting combination of weak classifiers can be explicitly calculated using a truth table.
We observe that this formula does not give the minimizer of the risk; we provide a system to compute the exact minimizer, and we check that the AdaBoost procedure in scikit-learn does not implement the algorithm described by Freund and Schapire.
arXiv Detail & Related papers (2023-08-02T10:37:25Z) - Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z) - PromptBoosting: Black-Box Text Classification with Ten Forward Passes [61.38341243907045]
We describe PromptBoosting, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM's parameters, gradients, or hidden representations.
Experiments show that PromptBoosting achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods.
arXiv Detail & Related papers (2022-12-19T06:04:54Z) - ProBoost: a Boosting Method for Probabilistic Classifiers [55.970609838687864]
ProBoost is a new boosting algorithm for probabilistic classifiers.
It uses the uncertainty of each training sample to determine the most challenging/uncertain ones.
It produces a sequence that progressively focuses on the samples found to have the highest uncertainty.
arXiv Detail & Related papers (2022-09-04T12:49:20Z) - Optimal Weak to Strong Learning [12.999258817707412]
We present a new algorithm that constructs a strong learner from a weak learner but uses less training data than AdaBoost.
A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal.
arXiv Detail & Related papers (2022-06-03T13:37:12Z) - A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient algorithms for reinforcement learning in decision processes, with complexity independent of the number of states.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z) - BiAdam: Fast Adaptive Bilevel Optimization Methods [104.96004056928474]
Bilevel optimization has attracted increased interest in machine learning due to its many applications.
We provide a useful analysis framework for both the constrained and unconstrained optimization.
arXiv Detail & Related papers (2021-06-21T20:16:40Z) - MP-Boost: Minipatch Boosting via Adaptive Feature and Observation Sampling [0.0]
MP-Boost is an algorithm loosely based on AdaBoost that learns by adaptively selecting small subsets of instances and features.
We empirically demonstrate the interpretability, comparative accuracy, and computational time of our approach on a variety of binary classification tasks.
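The minipatch idea above can be sketched loosely: each round draws a small subset of instances (weighted toward hard samples) and a small subset of features, and fits a weak learner only on that patch. This is a hypothetical illustration of the adaptive-subsampling idea only, not the published MP-Boost procedure; all names, the stump learner, and the AdaBoost-style step size are assumptions of this sketch.

```python
import math
import random

def minipatch_boost(X, y, rounds, n_rows=4, n_cols=1, seed=0):
    """Loose minipatch-style boosting sketch (illustrative only).

    Each round: sample rows proportionally to adaptive weights, sample a
    few columns uniformly, fit a threshold stump on that minipatch, then
    take an AdaBoost-style step on the full data.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n                                       # adaptive sample weights
    ensemble = []
    for _ in range(rounds):
        rows = rng.choices(range(n), weights=w, k=n_rows)   # instance minipatch
        cols = rng.sample(range(d), k=min(n_cols, d))       # feature minipatch
        best = None                                         # (patch error, f, t, pol)
        for f in cols:
            for t in {X[i][f] for i in rows}:
                for pol in (1, -1):
                    err = sum(1 for i in rows
                              if (pol if X[i][f] >= t else -pol) != y[i])
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        _, f, t, pol = best
        # full-data weighted error drives the step size, as in AdaBoost
        err = sum(wi for wi, x, yi in zip(w, X, y)
                  if (pol if x[f] >= t else -pol) != yi)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        w = [wi * math.exp(-alpha * yi * (pol if x[f] >= t else -pol))
             for wi, x, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def mp_predict(ensemble, x):
    """Sign of the alpha-weighted vote."""
    score = sum(a * (pol if x[f] >= t else -pol)
                for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

Because each round only touches a minipatch, the per-round cost is independent of the full dataset size, which is the computational appeal the summary alludes to.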
arXiv Detail & Related papers (2020-11-14T04:26:13Z) - Improved Quantum Boosting [1.0660480034605238]
Arunachalam and Maity gave the first quantum improvement for boosting, by combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for approximate counting.
In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio's SmoothBoost algorithm.
arXiv Detail & Related papers (2020-09-17T15:16:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.