The Impossibility of Parallelizing Boosting
- URL: http://arxiv.org/abs/2301.09627v3
- Date: Mon, 21 Aug 2023 09:11:40 GMT
- Title: The Impossibility of Parallelizing Boosting
- Authors: Amin Karbasi, Kasper Green Larsen
- Abstract summary: We investigate the possibility of parallelizing boosting.
Our main contribution is a strong negative result, implying that significant parallelization of boosting requires an exponential blow-up in the total computing resources needed for training.
- Score: 56.24620522825757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The aim of boosting is to convert a sequence of weak learners into a strong learner. At their heart, these methods are fully sequential. In this paper, we investigate the possibility of parallelizing boosting. Our main contribution is a strong negative result, implying that significant parallelization of boosting requires an exponential blow-up in the total computing resources needed for training.
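For context, the sequential dependence at the heart of boosting is visible in a minimal AdaBoost-style sketch (classic AdaBoost, not the paper's lower-bound construction; `weak_learner` is a hypothetical callable): the sample weights used to train the round-t hypothesis are a function of the round-(t-1) output, so the rounds cannot naively run in parallel.

```python
import numpy as np

def adaboost(X, y, weak_learner, T):
    """Minimal AdaBoost sketch for labels y in {-1, +1}.

    Inherently sequential: the weights D used to train round t
    depend on the predictions made in round t - 1.
    """
    n = len(y)
    D = np.full(n, 1.0 / n)                # uniform initial weights
    hypotheses, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, D)          # trained on CURRENT weights
        pred = h(X)                        # predictions in {-1, +1}
        eps = np.clip(D[pred != y].sum(), 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)
        D = D * np.exp(-alpha * y * pred)  # upweight misclassified points
        D = D / D.sum()
        hypotheses.append(h)
        alphas.append(alpha)
    return lambda X_: np.sign(sum(a * h(X_) for a, h in zip(alphas, hypotheses)))
```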
Related papers
- How to Boost Any Loss Function [63.573324901948716]
We show that any loss function can be optimized with boosting.
We also show that boosting can achieve a feat not yet known to be possible in the classical $0^{th}$ order setting.
arXiv Detail & Related papers (2024-07-02T14:08:23Z)
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- AdaBoost is not an Optimal Weak to Strong Learner [11.003568749905359]
We show that the sample complexity of AdaBoost, and other classic variations thereof, is sub-optimal by at least one logarithmic factor in the desired accuracy of the strong learner.
arXiv Detail & Related papers (2023-01-27T07:37:51Z)
- Quantum Boosting using Domain-Partitioning Hypotheses [0.9264464791978363]
Boosting is an ensemble learning method that converts a weak learner into a strong learner in the PAC learning framework.
We show that Q-RealBoost provides a speedup over Q-AdaBoost in terms of both the bias of the weak learner and the time taken by the weak learner to learn the target concept class.
arXiv Detail & Related papers (2021-10-25T10:46:13Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
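For reference, the conditional NML distribution this summary refers to has the standard form below (a schematic only; MURAL's particular amortized approximation is not reproduced here):

```latex
% Conditional NML: refit the model as if (x, y') had been observed,
% score each candidate label under that refit model, then normalize.
p_{\mathrm{NML}}(y \mid x)
  = \frac{p_{\hat\theta_{x,y}}(y \mid x)}
         {\sum_{y'} p_{\hat\theta_{x,y'}}(y' \mid x)},
\qquad
\hat\theta_{x,y}
  = \arg\max_{\theta}\; p_\theta\!\big(\mathcal{D} \cup \{(x, y)\}\big).
```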
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- The Success of AdaBoost and Its Application in Portfolio Management [0.6562256987706128]
We develop a novel approach to explain why AdaBoost is a successful classifier.
By introducing a measure of the influence of the noise points (ION) in the training data for the binary classification problem, we prove that there is a strong connection between the ION and the test error.
We apply AdaBoost in portfolio management via empirical studies in the Chinese market, which corroborates our theoretical propositions.
arXiv Detail & Related papers (2021-03-23T06:41:42Z)
- ADABOOK & MULTIBOOK: Adaptive Boosting with Chance Correction [3.7819322027528113]
It is possible for a weak learner to optimize Accuracy to the detriment of the more realistic chance-corrected measures, and when this happens the booster can give up too early.
This paper thus complements the theoretical work showing the necessity of using chance-corrected measures for evaluation, with empirical work showing how use of a chance-corrected measure can improve boosting.
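As a concrete illustration of why chance correction matters (using Bookmaker Informedness as the example measure; the ADABOOK/MULTIBOOK weighting itself is not reproduced here), a majority-class guesser can score high accuracy while a chance-corrected measure correctly scores it at zero:

```python
import numpy as np

def informedness(y_true, y_pred):
    """Bookmaker Informedness (Youden's J): TPR + TNR - 1.
    Chance-corrected: any constant or random guesser scores 0."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn) + tn / (tn + fp) - 1

# 95% negative class: always predicting 0 looks strong on accuracy.
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)
print((y_pred == y_true).mean())     # 0.95 accuracy
print(informedness(y_true, y_pred))  # 0.0 informedness
```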
arXiv Detail & Related papers (2020-10-11T01:17:32Z)
- Bridging the Imitation Gap by Adaptive Insubordination [88.35564081175642]
We show that when the teaching agent makes decisions with access to privileged information, this information is marginalized during imitation learning.
We propose 'Adaptive Insubordination' (ADVISOR) to address this gap.
ADVISOR dynamically weights imitation and reward-based reinforcement learning losses during training, enabling on-the-fly switching between imitation and exploration.
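A schematic of the loss mixing described here, with the weighting left abstract (in the actual method the weight comes from an auxiliary estimate of the imitation gap; this is a hedged sketch, not the paper's implementation):

```python
def advisor_style_loss(imitation_loss, rl_loss, w):
    """Schematic ADVISOR-style objective.

    w in [0, 1] is a per-state weight: near 1 where imitating the
    privileged teacher is judged useful, near 0 where the student
    should fall back on reward-based RL (i.e., explore on its own).
    """
    return w * imitation_loss + (1.0 - w) * rl_loss
```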
arXiv Detail & Related papers (2020-07-23T17:59:57Z)
- Boosting algorithms in energy research: A systematic review [0.0]
Boosting algorithms are characterized by both high flexibility and high interpretability.
We show that boosting has been underexploited so far, while great advances in the energy field are possible.
arXiv Detail & Related papers (2020-04-01T18:03:26Z)
- On the Dual Formulation of Boosting Algorithms [92.74617630106559]
We show that the Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems.
By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution.
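Schematically, such entropy-maximization duals optimize over the distribution $u$ on training examples (constants and the exact entropy term vary per algorithm; this is a sketch, not the paper's precise statement):

```latex
% Maximize the entropy of the example weights while capping the
% edge of every weak hypothesis h_j on the weighted sample.
\max_{u \in \Delta_n} \; -\sum_{i=1}^{n} u_i \log u_i
\quad \text{s.t.} \quad
\sum_{i=1}^{n} u_i \, y_i \, h_j(x_i) \le \epsilon
\quad \text{for every weak hypothesis } h_j.
```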
arXiv Detail & Related papers (2009-01-23T02:14:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.