The Success of AdaBoost and Its Application in Portfolio Management
- URL: http://arxiv.org/abs/2103.12345v1
- Date: Tue, 23 Mar 2021 06:41:42 GMT
- Title: The Success of AdaBoost and Its Application in Portfolio Management
- Authors: Yijian Chuan, Chaoyi Zhao, Zhenrui He, and Lan Wu
- Abstract summary: We develop a novel approach to explain why AdaBoost is a successful classifier.
By introducing a measure of the influence of the noise points (ION) in the training data for the binary classification problem, we prove that there is a strong connection between the ION and the test error.
We apply AdaBoost in portfolio management via empirical studies in the Chinese market, which corroborates our theoretical propositions.
- Score: 0.6562256987706128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a novel approach to explain why AdaBoost is a successful
classifier. By introducing a measure of the influence of the noise points (ION)
in the training data for the binary classification problem, we prove that there
is a strong connection between the ION and the test error. We further identify
that the ION of AdaBoost decreases as the iteration number or the complexity of
the base learners increases. We confirm that it is impossible to obtain a
consistent classifier without deep trees as the base learners of AdaBoost in
some complicated situations. We apply AdaBoost in portfolio management via
empirical studies in the Chinese market, which corroborates our theoretical
propositions.
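The role that base-learner complexity plays in the abstract is easy to probe with off-the-shelf tooling. Below is a minimal sketch, not the paper's experimental setup: it sweeps the depth of tree base learners on a synthetic noisy binary problem, assuming scikit-learn 1.2 or later (where AdaBoostClassifier takes the base learner via the estimator parameter); the dataset, depth grid, and iteration count are illustrative assumptions.

```python
# Sketch: test error of AdaBoost as the depth of the tree base learners grows.
# Dataset and hyperparameters are illustrative, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy binary problem; flip_y injects label noise, loosely playing the role
# of the paper's "noise points".
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (1, 2, 4, 8):
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=depth),
        n_estimators=200,
        random_state=0,
    )
    clf.fit(X_tr, y_tr)
    print(f"max_depth={depth}: test error = {1 - clf.score(X_te, y_te):.3f}")
```

In the paper's terms, the depth sweep is a crude proxy for varying the complexity of the base learners; the claim is that richer base learners drive down the influence of the noise points (ION), and with it the test error.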
Related papers
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z)
- When Analytic Calculus Cracks AdaBoost Code [0.30693357740321775]
This study analyzes the two-class AdaBoost procedure implemented in scikit-learn.
AdaBoost is an algorithm in name only, as the resulting combination of weak classifiers can be explicitly calculated using a truth table.
We observe that this formula does not give the minimizer of the risk; we provide a system to compute the exact minimizer and check that the AdaBoost procedure in scikit-learn does not implement the algorithm described by Freund and Schapire. A sketch of the truth-table view follows this entry.
arXiv Detail & Related papers (2023-08-02T10:37:25Z)
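Since a trained AdaBoost ensemble of decision stumps depends on an input only through the vector of stump outputs, the combined classifier can indeed be written out exactly as a finite truth table. A minimal sketch of that reduction, with hypothetical thresholds and weights rather than values from the paper:

```python
# Sketch: an AdaBoost combination of decision stumps reduces to a lookup
# table over the stumps' +/-1 outputs. Thresholds and weights are
# hypothetical stand-ins, not values from the paper.
from itertools import product

import numpy as np

thresholds = [0.2, 0.5, 0.8]        # stump j predicts +1 iff x > thresholds[j]
alphas = np.array([0.9, 0.4, 0.7])  # boosting weights of the three stumps

def stump_outputs(x: float) -> np.ndarray:
    return np.array([1 if x > t else -1 for t in thresholds])

def adaboost_predict(x: float) -> int:
    return int(np.sign(alphas @ stump_outputs(x)))

print("prediction at x=0.6:", adaboost_predict(0.6))

# The classifier is fully described by this table: one row per attainable
# pattern of stump outputs, independent of the original feature space.
for pattern in product((-1, 1), repeat=len(thresholds)):
    print(pattern, "->", int(np.sign(alphas @ np.array(pattern))))
```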
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- Local Boosting for Weakly-Supervised Learning [21.95003048165616]
Boosting is a technique to enhance the performance of a set of base models by combining them into a strong ensemble model.
In weakly supervised learning, where most of the data is labeled through weak and noisy sources, it remains nontrivial to design effective boosting approaches.
We propose LocalBoost, a novel framework for weakly-supervised boosting.
arXiv Detail & Related papers (2023-06-05T13:24:03Z)
- AdaBoost is not an Optimal Weak to Strong Learner [11.003568749905359]
We show that the sample complexity of AdaBoost, and of other classic variations thereof, is sub-optimal by at least one logarithmic factor in the desired accuracy of the strong learner.
arXiv Detail & Related papers (2023-01-27T07:37:51Z)
- The Impossibility of Parallelizing Boosting [56.24620522825757]
We investigate the possibility of parallelizing boosting.
Our main contribution is a strong negative result, implying that significant parallelization of boosting requires an exponential blow-up in the total computing resources needed for training.
arXiv Detail & Related papers (2023-01-23T18:57:16Z)
- ProBoost: a Boosting Method for Probabilistic Classifiers [55.970609838687864]
ProBoost is a new boosting algorithm for probabilistic classifiers.
It uses the uncertainty of each training sample to determine the most challenging/uncertain ones.
It produces a sequence that progressively focuses on the samples found to have the highest uncertainty; a sketch of this reweighting idea follows this entry.
arXiv Detail & Related papers (2022-09-04T12:49:20Z)
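The mechanism summarized above, reweighting training samples by predictive uncertainty rather than by misclassification, can be sketched in a few lines. The entropy-based uncertainty and the plain renormalizing update below are illustrative assumptions, not ProBoost's exact procedure:

```python
# Sketch of uncertainty-driven boosting rounds: each round refits a base
# model with sample weights derived from predictive entropy, so later
# rounds concentrate on the most uncertain samples. This is an illustrative
# stand-in for ProBoost, not the authors' exact algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
weights = np.full(len(y), 1.0 / len(y))  # start from the uniform distribution
models = []

for round_idx in range(5):
    base = LogisticRegression(max_iter=1000)
    base.fit(X, y, sample_weight=weights)
    models.append(base)
    proba = base.predict_proba(X).clip(1e-12, 1.0)
    entropy = -(proba * np.log(proba)).sum(axis=1)  # high entropy = uncertain
    weights = entropy / entropy.sum()               # focus the next round there
    print(f"round {round_idx}: mean predictive entropy = {entropy.mean():.4f}")
```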
- PRBoost: Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning [57.66155242473784]
Weakly-supervised learning (WSL) has shown promising results in addressing label scarcity on many NLP tasks.
Our proposed model, named PRBoost, achieves this goal via iterative prompt-based rule discovery and model boosting.
Experiments on four tasks show PRBoost outperforms state-of-the-art WSL baselines by up to 7.1%.
arXiv Detail & Related papers (2022-03-18T04:23:20Z)
- Quantum Boosting using Domain-Partitioning Hypotheses [0.9264464791978363]
Boosting is an ensemble learning method that converts a weak learner into a strong learner in the PAC learning framework.
We show that Q-RealBoost provides a speedup over Q-AdaBoost in terms of both the bias of the weak learner and the time taken by the weak learner to learn the target concept class.
arXiv Detail & Related papers (2021-10-25T10:46:13Z)
- ADABOOK & MULTIBOOK: Adaptive Boosting with Chance Correction [3.7819322027528113]
It is possible for a weak learner to optimize Accuracy to the detriment of the more realistic chance-corrected measures, and when this happens the booster can give up too early.
This paper thus complements the theoretical work showing the necessity of chance-corrected measures for evaluation with empirical work showing how using a chance-corrected measure can improve boosting; a sketch of the Accuracy-versus-informedness gap follows this entry.
arXiv Detail & Related papers (2020-10-11T01:17:32Z)
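The gap this paper exploits is easy to exhibit numerically: on imbalanced data, a trivial majority-class predictor earns high Accuracy while a chance-corrected measure exposes it as uninformative. The sketch below uses Bookmaker Informedness (TPR + TNR - 1) as the chance-corrected measure; the 5% positive rate and the majority-class stand-in for a weak learner are illustrative assumptions:

```python
# Sketch: Accuracy vs. a chance-corrected measure (informedness = TPR + TNR - 1)
# for a majority-class predictor on 5%-positive data. High Accuracy, zero
# informedness: the booster should not be fooled by the former.
import numpy as np

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.05).astype(int)  # ~5% positive labels
y_pred = np.zeros_like(y_true)                    # always predict majority class

accuracy = (y_pred == y_true).mean()
tpr = y_pred[y_true == 1].mean()        # recall on positives: 0.0 here
tnr = 1 - y_pred[y_true == 0].mean()    # recall on negatives: 1.0 here
informedness = tpr + tnr - 1

print(f"accuracy     = {accuracy:.3f}")      # ~0.95, looks strong
print(f"informedness = {informedness:.3f}")  # 0.0, no better than chance
```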
- On the Dual Formulation of Boosting Algorithms [92.74617630106559]
We show that the Lagrange dual problems of AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems.
By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution.
arXiv Detail & Related papers (2009-01-23T02:14:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.