Optimal Weak to Strong Learning
- URL: http://arxiv.org/abs/2206.01563v1
- Date: Fri, 3 Jun 2022 13:37:12 GMT
- Title: Optimal Weak to Strong Learning
- Authors: Kasper Green Larsen, Martin Ritzert
- Abstract summary: We present a new algorithm that constructs a strong learner from a weak learner but uses less training data than AdaBoost.
A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal.
- Score: 12.999258817707412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The classic AdaBoost algorithm allows one to convert a weak learner, that is, an algorithm producing a hypothesis slightly better than chance, into a strong learner that achieves arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner but uses less training data than AdaBoost and all other weak-to-strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner.
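For concreteness, the sketch below shows the classic AdaBoost loop in Python with decision stumps as the weak learner. It illustrates the weak-to-strong conversion described in the abstract, not the paper's new optimal algorithm; all names are illustrative.

```python
import numpy as np

def train_stump(X, y, w):
    """Weak learner: the single-feature threshold stump with the lowest
    weighted error; labels y are in {-1, +1}, w are example weights."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best  # (weighted error, feature index, threshold, sign)

def stump_predict(stump, X):
    _, j, thr, sign = stump
    return sign * np.where(X[:, j] <= thr, 1, -1)

def adaboost(X, y, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        stump = train_stump(X, y, w)
        err = np.clip(stump[0], 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # vote weight of this round
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)         # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    votes = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(votes)
```

Each round reweights the training set so mistakes matter more, then adds the new weak hypothesis with vote weight alpha; the final prediction is the sign of the weighted vote.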
Related papers
- The Many Faces of Optimal Weak-to-Strong Learning [10.985323882432086]
We present a new and surprisingly simple Boosting algorithm that obtains a provably optimal sample complexity.
Our pilot empirical study suggests that our new algorithm might outperform previous algorithms on large data sets.
arXiv Detail & Related papers (2024-08-30T09:38:51Z)
- Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method [53.170053108447455]
Ensemble learning is a method that leverages weak learners to produce a strong learner.
We design a smooth and convex objective function that leverages the concept of margin, making the strong learner more discriminative (a generic sketch of such a margin objective follows this entry).
We then compare our algorithm with random forests of ten times the size and other classical methods across numerous datasets.
arXiv Detail & Related papers (2024-08-06T03:42:38Z)
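The paper's tensor-optimization objective is not spelled out in the summary above, but the margin idea can be illustrated generically: choose combination weights for the weak learners' predictions by minimizing a smooth, convex loss of the margins. A minimal NumPy sketch, with a logistic margin loss standing in for the actual objective and all names illustrative:

```python
import numpy as np

# H[i, t] is weak learner t's prediction in {-1, +1} on example i;
# y[i] is the true label in {-1, +1}. The margin of example i under
# combination weights alpha is y[i] * (H[i] @ alpha).
def margin_loss(alpha, H, y):
    margins = y * (H @ alpha)
    return np.mean(np.logaddexp(0.0, -margins))  # smooth, convex in alpha

def margin_grad(alpha, H, y):
    margins = y * (H @ alpha)
    coef = -y / (1.0 + np.exp(margins))          # dloss/dmargin times y
    return H.T @ coef / len(y)

def fit_weights(H, y, lr=0.5, steps=500):
    alpha = np.zeros(H.shape[1])
    for _ in range(steps):
        alpha -= lr * margin_grad(alpha, H, y)   # plain gradient descent
    return alpha
```

Because the loss is convex in alpha, plain gradient descent suffices for the sketch; larger margins mean a more discriminative combined classifier.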
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm that does not require realizability assumptions.
We present a new result on boosting for list learners and provide a novel proof of the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- AdaBoost is not an Optimal Weak to Strong Learner [11.003568749905359]
We show that the sample complexity of AdaBoost, and of other classic variations thereof, is sub-optimal by at least one logarithmic factor in the desired accuracy of the strong learner.
arXiv Detail & Related papers (2023-01-27T07:37:51Z)
- A Moment-Matching Approach to Testable Learning and a New Characterization of Rademacher Complexity [15.746613321517282]
We give a powerful new approach for developing algorithms for testable learning, using tools from moment matching and metric distances in probability (a toy moment-matching check follows this entry).
Surprisingly, we show that the information-theoretic complexity of testable learning is tightly characterized by the Rademacher complexity of the concept class.
arXiv Detail & Related papers (2022-11-23T21:29:51Z)
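As a toy illustration of the moment-matching idea (not the paper's actual tester), the sketch below accepts a one-dimensional sample only if its low-degree empirical moments are close to those of the standard Gaussian; the degree cap and tolerance are arbitrary choices for the example.

```python
import numpy as np

def gaussian_moment(k):
    # E[Z^k] for Z ~ N(0, 1): 0 for odd k, (k-1)!! for even k.
    return 0.0 if k % 2 else float(np.prod(np.arange(k - 1, 0, -2)))

def moments_match(sample, max_degree=4, tol=0.2):
    # Accept only if every empirical moment up to max_degree is
    # within tol of the corresponding standard-Gaussian moment.
    return all(
        abs(np.mean(sample ** k) - gaussian_moment(k)) <= tol
        for k in range(1, max_degree + 1)
    )

rng = np.random.default_rng(0)
print(moments_match(rng.standard_normal(100_000)))  # likely True
print(moments_match(rng.uniform(-1, 1, 100_000)))   # likely False
```

The uniform sample fails because its second and fourth moments (1/3 and 1/5) are far from the Gaussian values 1 and 3, which is the kind of distributional deviation a tester is meant to flag.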
- Adaptive Federated Minimax Optimization with Lower Complexities [82.51223883622552]
We propose an efficient adaptive minimax optimization algorithm (AdaFGDA) to solve such federated minimax problems.
It builds on momentum-based variance-reduced and local-SGD techniques and flexibly incorporates various adaptive learning rates (a plain descent-ascent sketch follows this entry).
arXiv Detail & Related papers (2022-11-14T12:32:18Z)
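AdaFGDA itself is federated, variance-reduced, and adaptive; as a hedged baseline, the sketch below shows the plain gradient descent-ascent template for min-max problems that such algorithms build on, run on a toy quadratic saddle objective (all names illustrative).

```python
def f_grad(x, y):
    # Toy saddle objective f(x, y) = 0.5*x^2 + x*y - 0.5*y^2,
    # convex in x and concave in y, with saddle point at (0, 0).
    return x + y, x - y          # (df/dx, df/dy)

def gda(x=1.0, y=1.0, lr=0.05, steps=2000):
    for _ in range(steps):
        gx, gy = f_grad(x, y)
        x -= lr * gx             # descent step on the min variable
        y += lr * gy             # ascent step on the max variable
    return x, y                  # converges toward the saddle (0, 0)

print(gda())
```

Momentum-based variance reduction, local client updates, and adaptive step sizes all slot into this loop in place of the fixed learning rate and exact gradients.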
- A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient algorithms for reinforcement learning in decision processes whose complexity is independent of the number of states.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm (a toy learned-heuristic search sketch follows this entry).
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
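As a hedged illustration of the idea (not the paper's integer least-squares setup), the sketch below runs a best-first A*-style search whose heuristic is a pluggable function; in HATS that role would be played by the trained network's cost-to-go estimate.

```python
import heapq

def a_star(start, neighbors, is_goal, h):
    """Best-first search guided by g + h, where h is any heuristic,
    e.g. a learned model's cost-to-go estimate."""
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    seen = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g
        if seen.get(node, float("inf")) <= g:
            continue                             # already expanded cheaper
        seen[node] = g
        for nxt, cost in neighbors(node):
            ng = g + cost
            heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

# Toy usage: integer nodes 0..9 on a line, goal is 9; the lambda heuristic
# stands in for a neural network's prediction.
path, cost = a_star(
    0,
    neighbors=lambda n: [(n + 1, 1.0)] + ([(n - 1, 1.0)] if n > 0 else []),
    is_goal=lambda n: n == 9,
    h=lambda n: 9 - n,           # pretend this is model.predict(n)
)
print(path, cost)
```

The better the heuristic estimates the true remaining cost, the fewer nodes the search expands, which is the acceleration HATS aims for.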
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)