Improved Quantum Query Upper Bounds Based on Classical Decision Trees
- URL: http://arxiv.org/abs/2203.02968v1
- Date: Sun, 6 Mar 2022 14:04:06 GMT
- Title: Improved Quantum Query Upper Bounds Based on Classical Decision Trees
- Authors: Arjan Cornelissen, Nikhil S. Mande, Subhasree Patro
- Abstract summary: Given a classical query algorithm as a decision tree, when does there exist a quantum query algorithm with a speed-up over the classical one?
We provide a general construction based on the structure of the underlying decision tree, and prove that this can give us an up-to-quadratic quantum speed-up.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given a classical query algorithm as a decision tree, when does there exist a
quantum query algorithm with a speed-up over the classical one? We provide a
general construction based on the structure of the underlying decision tree,
and prove that this can give us an up-to-quadratic quantum speed-up. In
particular, we obtain a bounded-error quantum query algorithm of cost
$O(\sqrt{s})$ to compute a Boolean function (more generally, a relation) that
can be computed by a classical (even randomized) decision tree of size $s$.
Lin and Lin [ToC'16] and Beigi and Taghavi [Quantum'20] showed results of a
similar flavor, and gave upper bounds in terms of a quantity which we call the
"guessing complexity" of a decision tree. We identify that the guessing
complexity of a decision tree equals its rank, a notion introduced by
Ehrenfeucht and Haussler [Inf. Comp.'89] in the context of learning theory.
This answers a question posed by Lin and Lin, who asked whether the guessing
complexity of a decision tree is related to any complexity-theoretic measure.
We also show a polynomial separation between rank and randomized rank for the
complete binary AND-OR tree.
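The rank mentioned above has a simple recursive definition due to Ehrenfeucht and Haussler: a leaf has rank 0, and an internal node whose two subtrees have ranks $r_1$ and $r_2$ has rank $\max(r_1, r_2)$ if $r_1 \neq r_2$, and $r_1 + 1$ if $r_1 = r_2$. A minimal sketch computing it (the tuple-based tree encoding is our own illustration, not from the paper):

```python
# Sketch: Ehrenfeucht--Haussler rank of a binary decision tree.
# Encoding (illustrative): a leaf is any non-tuple value; an internal
# node is a tuple (queried_variable, left_subtree, right_subtree).

def rank(tree):
    """Rank 0 for a leaf; for an internal node with child ranks
    r1, r2, the rank is max(r1, r2) if they differ, else r1 + 1."""
    if not isinstance(tree, tuple):
        return 0  # leaf
    _, left, right = tree
    r1, r2 = rank(left), rank(right)
    return max(r1, r2) if r1 != r2 else r1 + 1

# A complete depth-2 tree has rank 2; a "caterpillar" tree whose every
# left child is a leaf has rank 1 regardless of its depth.
complete = ("x1", ("x2", 0, 1), ("x3", 1, 0))
caterpillar = ("x1", 0, ("x2", 0, ("x3", 0, 1)))
assert rank(complete) == 2
assert rank(caterpillar) == 1
```

By the identification made in the abstract, this quantity equals the guessing complexity appearing in the bounds of Lin and Lin and of Beigi and Taghavi.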
Beigi and Taghavi constructed span programs and dual adversary solutions for
Boolean functions given classical decision trees computing them and an
assignment of non-negative weights to their edges. We explore the effect of
changing these weights on the resulting span program complexity and objective
value of the dual adversary bound, and capture the best possible weighting
scheme by an optimization program. We exhibit a solution to this program and
argue its optimality from first principles. We also exhibit decision trees for
which our bounds are asymptotically stronger than those of Lin and Lin, and
Beigi and Taghavi. This answers a question of Beigi and Taghavi, who asked
whether different weighting schemes could yield better upper bounds.
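For intuition on how rank relates to tree size (a standard induction, not an argument reproduced from this paper), note that a tree of rank $r$ must contain at least $2^r$ leaves:

```latex
% Fact: a binary decision tree T of rank r has at least 2^r leaves.
% Induction on the number of nodes; let the root's subtrees have
% ranks r_1 and r_2:
%   - if r_1 = r_2 = r - 1, each subtree has >= 2^{r-1} leaves,
%     so T has >= 2^{r-1} + 2^{r-1} = 2^r leaves;
%   - if r_1 != r_2, the subtree of larger rank already has rank
%     r = max(r_1, r_2) and, by induction, >= 2^r leaves.
% Hence, for a tree of size (number of leaves) s:
\[
  \operatorname{rank}(T) \;\le\; \log_2 s .
\]
```

So a decision tree of small size necessarily has small rank, and hence small guessing complexity.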
Related papers
- Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm of arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z)
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Quantum-inspired attribute selection algorithm: A Fidelity-based Quantum Decision Tree [1.6190746208019742]
A classical decision tree is built entirely from splitting measures, which quantify how the occurrence of random events corresponds to the class labels.
We propose to use fidelity as a quantum splitting criterion to construct an efficient and balanced quantum decision tree.
arXiv Detail & Related papers (2023-10-27T16:29:42Z)
- TreeDQN: Learning to minimize Branch-and-Bound tree [78.52895577861327]
Branch-and-Bound is a convenient approach to solving optimization tasks in the form of Mixed Integer Linear Programs.
The efficiency of the solver depends on the branching heuristic used to select a variable for splitting.
We propose a reinforcement learning method that can efficiently learn the branching rule.
arXiv Detail & Related papers (2023-06-09T14:01:26Z)
- Efficient Quantum Agnostic Improper Learning of Decision Trees [0.18416014644193066]
We give a $\mathrm{poly}(n,t,1/\varepsilon)$ quantum algorithm for agnostically learning size-$t$ decision trees with uniform marginal over instances.
Our algorithm is the first algorithm (classical or quantum) for learning decision trees in polynomial time without membership queries.
arXiv Detail & Related papers (2022-10-01T07:11:19Z)
- How Smart Guessing Strategies Can Yield Massive Scalability Improvements for Sparse Decision Tree Optimization [18.294573939199438]
Current algorithms often require impractical amounts of time and memory to find optimal or near-optimal trees for some real-world datasets.
We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm.
Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree.
arXiv Detail & Related papers (2021-12-01T19:39:28Z)
- Properly learning decision trees in almost polynomial time [25.763690981846125]
We give an $n^{O(\log\log n)}$-time membership query algorithm for properly and agnostically learning decision trees.
Our algorithm shares similarities with practical heuristics for learning decision trees.
We show how every decision tree can be "pruned" so that every variable in the resulting tree is influential.
arXiv Detail & Related papers (2021-09-01T22:12:47Z)
- Simple is better: Making Decision Trees faster using random sampling [4.284674689172996]
In recent years, gradient boosted decision trees have become popular in building robust machine learning models on big data.
We show theoretically and empirically that choosing the split points uniformly at random provides the same or even better performance in terms of accuracy and computational efficiency (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-08-19T17:00:21Z)
- Convex Polytope Trees [57.56078843831244]
Convex polytope trees (CPT) are proposed to expand the family of decision trees by an interpretable generalization of their decision boundary.
We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given.
arXiv Detail & Related papers (2020-10-21T19:38:57Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
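As promised in the "Simple is better" entry above, here is a minimal sketch (our own illustration with hypothetical names, not the paper's code) of choosing a split point uniformly at random from a feature's observed range rather than scanning every candidate threshold:

```python
# Sketch of the random-split idea: pick a feature, then a threshold
# uniformly at random from its observed range, instead of evaluating
# every candidate threshold as an exact greedy splitter would.
# All names here are illustrative.
import numpy as np

def random_split(X, rng):
    """Return a (feature, threshold) pair chosen at random."""
    _, n_features = X.shape
    feature = rng.integers(n_features)
    lo, hi = X[:, feature].min(), X[:, feature].max()
    return feature, rng.uniform(lo, hi)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # toy data
feature, threshold = random_split(X, rng)
left = X[X[:, feature] <= threshold]   # rows routed to the left child
right = X[X[:, feature] > threshold]   # rows routed to the right child
```

This costs one threshold draw per node instead of a scan over all observed values per feature, which is where the computational savings described in that entry would come from.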
This list is automatically generated from the titles and abstracts of the papers on this site.