Quantum Boosting using Domain-Partitioning Hypotheses
- URL: http://arxiv.org/abs/2110.12793v1
- Date: Mon, 25 Oct 2021 10:46:13 GMT
- Title: Quantum Boosting using Domain-Partitioning Hypotheses
- Authors: Debajyoti Bera, Sagnik Chatterjee
- Abstract summary: Boosting is an ensemble learning method that converts a weak learner into a strong learner in the PAC learning framework.
We show that Q-RealBoost provides a speedup over Q-AdaBoost in terms of both the bias of the weak learner and the time taken by the weak learner to learn the target concept class.
- Score: 0.9264464791978363
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Boosting is an ensemble learning method that converts a weak learner into a
strong learner in the PAC learning framework. Freund and Schapire gave the
first classical boosting algorithm for binary hypotheses, known as AdaBoost, and
this was recently adapted into a quantum boosting algorithm by Arunachalam et
al. Their quantum boosting algorithm (which we refer to as Q-AdaBoost) is
quadratically faster than the classical version in terms of the VC-dimension of
the hypothesis class of the weak learner but polynomially worse in the bias of
the weak learner.
In this work we design a different quantum boosting algorithm that uses
domain partitioning hypotheses that are significantly more flexible than those
used in prior quantum boosting algorithms in terms of margin calculations. Our
algorithm Q-RealBoost is inspired by the "Real AdaBoost" (aka. RealBoost)
extension to the original AdaBoost algorithm. Further, we show that Q-RealBoost
provides a polynomial speedup over Q-AdaBoost in terms of both the bias of the
weak learner and the time taken by the weak learner to learn the target concept
class.
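For intuition, here is a minimal classical sketch of the confidence-rated, domain-partitioning update that RealBoost uses and that Q-RealBoost builds on: a weak hypothesis partitions the domain into blocks and predicts a real-valued confidence 0.5*ln(W+/W-) per block instead of a single ±1 label. The stump partitioner, random candidate search, and smoothing constant below are illustrative choices, not the paper's construction (and nothing here is quantum).

```python
import numpy as np

rng = np.random.default_rng(0)

def partition_hypothesis(X, y, D, feature, threshold, eps=1e-8):
    """Domain-partitioning weak hypothesis (Schapire-Singer style): split
    the domain into blocks and emit a real-valued confidence per block,
    0.5 * ln(W+ / W-), where W+/W- are the D-weights of the positive and
    negative examples falling in that block."""
    def block_of(Z):
        return (Z[:, feature] > threshold).astype(int)
    conf = np.zeros(2)
    b = block_of(X)
    for j in (0, 1):
        w_pos = D[(b == j) & (y == +1)].sum() + eps  # smoothing avoids ln(0)
        w_neg = D[(b == j) & (y == -1)].sum() + eps
        conf[j] = 0.5 * np.log(w_pos / w_neg)
    return lambda Z: conf[block_of(Z)]

def realboost(X, y, rounds=20, candidates=10):
    """Toy RealBoost loop: reweight each example by exp(-y * h(x)) and
    classify with the sign of the summed confidences."""
    n, d = X.shape
    D = np.full(n, 1.0 / n)
    hyps = []
    for _ in range(rounds):
        # Illustrative weak learner: best random axis-aligned partition,
        # scored by the normalization Z = sum_i D_i * exp(-y_i * h(x_i)).
        cands = []
        for _ in range(candidates):
            f = int(rng.integers(d))
            t = float(rng.choice(X[:, f]))
            cands.append(partition_hypothesis(X, y, D, f, t))
        h = min(cands, key=lambda g: np.sum(D * np.exp(-y * g(X))))
        D *= np.exp(-y * h(X))  # low-margin examples gain weight
        D /= D.sum()
        hyps.append(h)
    return lambda Z: np.sign(sum(h(Z) for h in hyps))
```

The returned classifier labels new points by the sign of the summed block confidences; the per-block log-odds are exactly the flexible "margin calculations" that a single ±1-valued AdaBoost hypothesis cannot express.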
Related papers
- How to Boost Any Loss Function [63.573324901948716]
We show that any loss function can be efficiently optimized with boosting.
Further, boosting achieves a feat not yet known to be possible in the classical zeroth-order setting.
arXiv Detail & Related papers (2024-07-02T14:08:23Z)
- Supervised Learning Guarantee for Quantum AdaBoost [7.473180902609473]
In the noisy intermediate-scale quantum (NISQ) era, variational quantum algorithms are constrained due to a limited number of qubits and the shallow depth of quantum circuits.
In this paper, we theoretically establish and numerically verify a learning guarantee for quantum adaptive boosting (AdaBoost)
Our work indicates that in the current NISQ era, introducing appropriate ensemble methods is particularly valuable in improving the performance of quantum machine learning algorithms.
arXiv Detail & Related papers (2024-02-04T07:18:44Z)
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm, that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- PromptBoosting: Black-Box Text Classification with Ten Forward Passes [61.38341243907045]
We describe PromptBoosting, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM's parameters, gradients, or hidden representations.
Experiments show that PromptBoosting achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods (a toy prompt-ensemble sketch appears after this list).
arXiv Detail & Related papers (2022-12-19T06:04:54Z)
- Online Agnostic Multiclass Boosting [20.22409095000365]
We give the first boosting algorithm for online agnostic multiclass classification.
Our reduction also enables the construction of algorithms for statistical agnostic, online realizable, and statistical realizable multiclass boosting.
arXiv Detail & Related papers (2022-05-30T13:59:55Z)
- Boosting for Online Convex Optimization [64.15578413206715]
We consider the decision-making framework of online convex optimization with a large number of experts.
We define a weak learning algorithm as a mechanism that guarantees approximate regret against a base class of experts.
We give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.
arXiv Detail & Related papers (2021-02-18T12:30:49Z)
- MP-Boost: Minipatch Boosting via Adaptive Feature and Observation Sampling [0.0]
MP-Boost is an algorithm loosely based on AdaBoost that learns by adaptively selecting small subsets of instances and features.
We empirically demonstrate the interpretability, comparative accuracy, and computational time of our approach on a variety of binary classification tasks (a loose minipatch-sampling sketch appears after this list).
arXiv Detail & Related papers (2020-11-14T04:26:13Z)
- Improved Quantum Boosting [1.0660480034605238]
Arunachalam and Maity gave the first quantum improvement for boosting, by combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for approximate counting.
In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio's SmoothBoost algorithm (a classical sketch of the SmoothBoost weighting appears after this list).
arXiv Detail & Related papers (2020-09-17T15:16:42Z)
- Online Agnostic Boosting via Regret Minimization [47.19178618537368]
Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules.
We provide the first agnostic online boosting algorithm: given a weak learner with only a marginally-better-than-trivial regret guarantee, our algorithm boosts it to a strong learner with sublinear regret.
arXiv Detail & Related papers (2020-03-02T19:21:25Z)
- On the Dual Formulation of Boosting Algorithms [92.74617630106559]
We show that the Lagrange dual problems of AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems.
By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution (a schematic of this primal-dual pairing appears after this list).
arXiv Detail & Related papers (2009-01-23T02:14:42Z)
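For the PromptBoosting entry, here is a toy version of boosting over prompt-induced weak classifiers. `lm_class_logits` is a hypothetical stand-in for black-box LM access; its name and signature are assumptions, not the paper's API:

```python
import numpy as np

def lm_class_logits(prompt, text):
    """Hypothetical black box: one LM forward pass returning a score per
    class label word (stand-in for the paper's verbalizer idea).
    Assumption: some gradient-free, parameters-hidden LM access exists."""
    raise NotImplementedError

def prompt_predictions(prompt, texts):
    # A prompt induces a weak classifier: predict the highest-scoring class.
    return np.array([int(np.argmax(lm_class_logits(prompt, t))) for t in texts])

def prompt_boost(prompts, texts, y, rounds=10):
    """Discrete AdaBoost (binary form) over a pool of prompt classifiers,
    using forward passes only -- no parameters, gradients, hidden states."""
    n = len(y)
    D = np.full(n, 1.0 / n)
    preds = {p: prompt_predictions(p, texts) for p in prompts}  # one pass per text
    ensemble = []
    for _ in range(rounds):
        p = min(prompts, key=lambda q: D[preds[q] != y].sum())   # least weighted error
        err = np.clip(D[preds[p] != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)                    # AdaBoost vote weight
        D *= np.exp(alpha * np.where(preds[p] != y, 1.0, -1.0))  # upweight mistakes
        D /= D.sum()
        ensemble.append((alpha, p))
    return ensemble
```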
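For the MP-Boost entry, a loose sketch of boosting over "minipatches" (small instance/feature subsets), with labels assumed in {-1, +1}; the fixed reweighting rate and stump learner are simplifications of the paper's adaptive scheme:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def minipatch_boost(X, y, rounds=50, n_obs=64, n_feat=8, seed=0):
    """Loose MP-Boost-style loop: each round fits a stump on a small
    instance/feature minipatch, with instances drawn in proportion to
    their current hardness weight.  The exact adaptive sampling here is
    a simplifying assumption, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    D = np.full(n, 1.0 / n)                       # instance hardness weights
    members = []
    for _ in range(rounds):
        rows = rng.choice(n, size=min(n_obs, n), replace=False, p=D)
        cols = rng.choice(d, size=min(n_feat, d), replace=False)
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X[np.ix_(rows, cols)], y[rows])
        members.append((stump, cols))
        miss = stump.predict(X[:, cols]) != y     # evaluate on all instances
        D *= np.exp(np.where(miss, 0.5, -0.5))    # upweight misclassified
        D /= D.sum()
    return lambda Z: np.sign(sum(m.predict(Z[:, c]) for m, c in members))
```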
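For the Improved Quantum Boosting entry, a rough classical sketch of the SmoothBoost weighting it quantizes, following one common presentation of Servedio's algorithm; the parameter choices and exact measure function should be treated as illustrative:

```python
import numpy as np

def smoothboost(X, y, weak_learner, theta=0.1, gamma=0.2, kappa=0.1,
                max_rounds=200):
    """Sketch of SmoothBoost-style weighting.  N[i] tracks example i's
    cumulative margin shifted by theta; the measure M caps every weight
    at 1, so the round distribution D is 'smooth' -- no example carries
    more than 1/(kappa*n) of the mass, the property a quantum algorithm
    can exploit.  weak_learner(X, y, D) is assumed to return a callable
    hypothesis with advantage gamma under D."""
    n = len(y)
    N = np.zeros(n)                                   # cumulative shifted margins
    hyps = []
    for _ in range(max_rounds):
        M = np.where(N <= 0, 1.0, (1.0 - gamma) ** (N / 2.0))
        if M.sum() < kappa * n:                       # total measure small: done
            break
        D = M / M.sum()                               # smooth distribution
        h = weak_learner(X, y, D)
        hyps.append(h)
        N += y * h(X) - theta                         # margins advance each round
    return lambda Z: np.sign(sum(h(Z) for h in hyps))
```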
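Finally, for the dual-formulation entry, a schematic (notation mine, not the paper's exact statement) of the pairing it studies: an l1-constrained exponential-loss primal over hypothesis weights, whose Lagrange dual maximizes the entropy of example weights subject to edge constraints; those dual weights are what give the margin-distribution reading of boosting.

```latex
% Schematic primal-dual pairing (illustrative, not the paper's exact statement):
\min_{\alpha \ge 0,\ \|\alpha\|_1 \le s}\ \sum_{i=1}^{n}
  \exp\!\Bigl(-y_i \textstyle\sum_{j} \alpha_j h_j(x_i)\Bigr)
\quad\longleftrightarrow\quad
\max_{w \in \Delta_n}\ -\sum_{i=1}^{n} w_i \log w_i
\ \ \text{s.t.}\ \ \sum_{i=1}^{n} w_i\, y_i\, h_j(x_i) \le r \quad \forall j.
```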