Universal guarantees for decision tree induction via a higher-order
splitting criterion
- URL: http://arxiv.org/abs/2010.08633v1
- Date: Fri, 16 Oct 2020 21:20:45 GMT
- Title: Universal guarantees for decision tree induction via a higher-order
splitting criterion
- Authors: Guy Blanc, Neha Gupta, Jane Lange, Li-Yang Tan
- Abstract summary: Our algorithm achieves provable guarantees for all target functions $f: \{-1,1\}^n \to \{-1,1\}$ with respect to the uniform distribution.
The crux of our extension is a new splitting criterion that takes into account the correlations between $f$ and small subsets of its attributes.
Our algorithm satisfies the following guarantee: for all target functions $f : \{-1,1\}^n \to \{-1,1\}$, sizes $s \in \mathbb{N}$, and error parameters $\epsilon$, it constructs a decision tree of size $s^{\tilde{O}((\log s)^2/\epsilon^2)}$ that achieves error $\le O(\mathsf{opt}_s) + \epsilon$.
- Score: 16.832966312395126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a simple extension of top-down decision tree learning heuristics
such as ID3, C4.5, and CART. Our algorithm achieves provable guarantees for all
target functions $f: \{-1,1\}^n \to \{-1,1\}$ with respect to the uniform
distribution, circumventing impossibility results showing that existing
heuristics fare poorly even for simple target functions. The crux of our
extension is a new splitting criterion that takes into account the correlations
between $f$ and small subsets of its attributes. The splitting criteria of
existing heuristics (e.g. Gini impurity and information gain), in contrast, are
based solely on the correlations between $f$ and its individual attributes.
Our algorithm satisfies the following guarantee: for all target functions $f
: \{-1,1\}^n \to \{-1,1\}$, sizes $s\in \mathbb{N}$, and error parameters
$\epsilon$, it constructs a decision tree of size $s^{\tilde{O}((\log
s)^2/\epsilon^2)}$ that achieves error $\le O(\mathsf{opt}_s) + \epsilon$,
where $\mathsf{opt}_s$ denotes the error of the optimal size $s$ decision tree.
A key technical notion that drives our analysis is the noise stability of $f$,
a well-studied smoothness measure.
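To make the contrast with single-attribute criteria concrete, here is a toy Python sketch (not the paper's actual algorithm) that estimates the empirical correlations between a target $f$ and all attribute subsets up to a given size, under the $\{-1,+1\}$ encoding and uniform samples, and returns the most correlated subset as a split candidate:

```python
import itertools
import numpy as np

def subset_correlations(X, y, max_order=2):
    """Estimate |E[f(x) * prod_{i in S} x_i]| for all attribute subsets S
    with |S| <= max_order -- the empirical Fourier coefficients of f.

    X: (m, n) array with entries in {-1, +1}; y: (m,) array in {-1, +1}.
    Returns a dict mapping subsets (tuples of indices) to estimates."""
    m, n = X.shape
    corr = {}
    for k in range(1, max_order + 1):
        for S in itertools.combinations(range(n), k):
            chi_S = np.prod(X[:, S], axis=1)   # parity of attributes in S
            corr[S] = abs(np.mean(chi_S * y))  # empirical |f_hat(S)|
    return corr

def best_split(X, y, max_order=2):
    """Return the attribute subset with the largest empirical correlation.
    A toy stand-in for a higher-order splitting criterion; a real learner
    would branch on an attribute appearing in this subset."""
    corr = subset_correlations(X, y, max_order)
    return max(corr, key=corr.get)
```

On a parity-like target such as $f(x) = x_0 x_1$, every single-attribute correlation vanishes under the uniform distribution, while the pair $\{0,1\}$ has correlation one; a Gini- or information-gain-based criterion sees nothing, but the higher-order criterion does.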
Related papers
- Monge-Kantorovich Fitting With Sobolev Budgets [6.748324975906262]
We quantify the performance of the approximation with the Monge-Kantorovich $p$-cost.
We may then reformulate the problem as minimizing a functional $\mathscr{J}_p(f)$ under a constraint on the Sobolev budget.
arXiv Detail & Related papers (2024-09-25T01:30:16Z) - Differentially Private Kernel Density Estimation [11.526850085349155]
We introduce a refined differentially private (DP) data structure for kernel density estimation (KDE)
We study the mathematical problem: given a similarity function $f$ and a private dataset $X \subset \mathbb{R}^d$, our goal is to preprocess $X$ so that for any query $y \in \mathbb{R}^d$, we approximate $\sum_{x \in X} f(x, y)$ in a differentially private fashion.
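For intuition, a baseline (far simpler than the refined data structure in the paper) answers a single query with the standard Laplace mechanism, assuming the similarity $f$ is bounded in $[0,1]$ so that one point changes the sum by at most 1:

```python
import numpy as np

def dp_kernel_sum(X, y, f, epsilon, rng=None):
    """Answer sum_{x in X} f(x, y) with epsilon-DP via the Laplace
    mechanism. Assumes f maps into [0, 1], so adding or removing one
    data point changes the true sum by at most 1 (sensitivity 1)."""
    rng = np.random.default_rng() if rng is None else rng
    true_sum = sum(f(x, y) for x in X)
    return true_sum + rng.laplace(scale=1.0 / epsilon)
```

The point of the preprocessing studied in the paper is to do better than paying fresh noise per query; this sketch only illustrates the privacy mechanism itself.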
arXiv Detail & Related papers (2024-09-03T08:01:19Z) - Adaptive approximation of monotone functions [0.0]
We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors.
Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm.
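A GreedyBox-flavored toy (not the paper's exact algorithm): for a nondecreasing $f$ on $[a,b]$, the gap $f(r)-f(l)$ bounds the sup-error of a constant fit on $[l,r]$, so one can adaptively bisect the subinterval with the largest gap:

```python
import heapq

def approx_monotone(f, a, b, n_evals=32):
    """Adaptively approximate a nondecreasing f on [a, b] by greedily
    bisecting the subinterval with the largest value gap f(r) - f(l).
    Returns the sorted evaluation points and the final worst gap."""
    cache = {a: f(a), b: f(b)}
    heap = [(-(cache[b] - cache[a]), a, b)]   # max-heap via negated gaps
    while len(cache) < n_evals:
        gap, l, r = heapq.heappop(heap)
        m = 0.5 * (l + r)
        cache[m] = f(m)
        heapq.heappush(heap, (-(cache[m] - cache[l]), l, m))
        heapq.heappush(heap, (-(cache[r] - cache[m]), m, r))
    pts = sorted(cache)
    worst = max(cache[pts[i + 1]] - cache[pts[i]]
                for i in range(len(pts) - 1))
    return pts, worst
```

Monotonicity is what makes the gap a valid error certificate without any smoothness assumption, which is the setting the snippet above refers to.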
arXiv Detail & Related papers (2023-09-14T08:56:31Z) - Fully-Dynamic Approximate Decision Trees With Worst-Case Update Time
Guarantees [3.5509551353363644]
We give the first algorithm that maintains an approximate decision tree over an arbitrary sequence of insertions and deletions of labeled examples.
We provide a deterministic algorithm that maintains an $\epsilon$-approximate tree using $O\!\left(\frac{d\, f(n)}{n}\operatorname{poly}\frac{h}{\epsilon}\right)$ operations per update.
arXiv Detail & Related papers (2023-02-08T11:02:58Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a convex body, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
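The first sentence of the snippet refers to classical first-order methods for concave maximization; a minimal illustration of such a method (generic projected gradient ascent over a box, not the paper's guarantee for $F = G + C$) is:

```python
import numpy as np

def projected_gradient_ascent(grad_F, x0, steps=500, lr=0.05,
                              lo=0.0, hi=1.0):
    """Maximize a function via gradient ascent, projecting each iterate
    back onto the box [lo, hi]^n. Converges to the maximizer when the
    objective is smooth and concave and lr is small enough."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x + lr * grad_F(x), lo, hi)
    return x
```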
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - An Optimal Separation of Randomized and Quantum Query Complexity [67.19751155411075]
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}}\,(1+\log n)^{\ell-1}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
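As background for "model-free", here is textbook tabular Q-learning on a generic MDP interface; this is the standard baseline, not the improved algorithm of the paper, and the `step` interface is an assumption of this sketch:

```python
import random

def q_learning(n_states, n_actions, step, gamma=0.9, episodes=2000,
               alpha=0.1, eps=0.1, max_steps=100, seed=0):
    """Minimal tabular Q-learning. `step(s, a)` returns
    (next_state, reward, done); no model of the MDP is built."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):          # cap episode length
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda b: Q[s][b])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q
```

On a three-state chain where action 1 moves right (terminal reward at the last state) and action 0 stays put, the learned values prefer moving right at every state.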
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Provable guarantees for decision tree induction: the agnostic setting [16.784355746717562]
We give strengthened provable guarantees on the performance of widely employed and empirically successful top-down decision tree learning heuristics.
We show that for all monotone functions $f$ and parameters $s \in \mathbb{N}$, these heuristics construct a decision tree of size $s^{\tilde{O}((\log s)/\varepsilon^2)}$ that achieves error $\le O(\mathsf{opt}_s) + \varepsilon$.
We complement our algorithmic guarantee with a near-matching $s^{\tilde{\Omega}(\log s)}$ lower bound.
arXiv Detail & Related papers (2020-06-01T06:44:07Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det\left(\sum_{i \in S} v_i v_i^{\top}\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
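A heuristic sketch of the objective (not the paper's algorithm, and using only a cardinality constraint, i.e. a uniform matroid, for simplicity): greedily add the vector whose rank-one update most increases $\log\det$, with a small ridge term so early determinants are nonzero:

```python
import numpy as np

def greedy_logdet(vectors, k, reg=1e-9):
    """Greedily pick k vectors to (approximately) maximize
    det(sum_{i in S} v_i v_i^T). `reg * I` keeps the matrix
    nonsingular before d vectors have been chosen."""
    d = len(vectors[0])
    S, M = [], reg * np.eye(d)
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i, v in enumerate(vectors):
            if i in S:
                continue
            _, logdet = np.linalg.slogdet(M + np.outer(v, v))
            if logdet > best_val:
                best_i, best_val = i, logdet
        S.append(best_i)
        M = M + np.outer(vectors[best_i], vectors[best_i])
    return S
```

Note how the determinant objective rewards diversity: given two copies of $e_1$ and one $e_2$, the greedy rule picks one of each rather than two parallel vectors.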
arXiv Detail & Related papers (2020-04-16T19:16:38Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$.
arXiv Detail & Related papers (2020-02-17T18:41:49Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$-upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both compatible with oracles, and achieves complexity bounds of $\tilde{O}(n^2+n\sqrt{L/\mu})\log(1/\epsilon)$ and $O(n\sqrt{L/\epsilon})$, for $\mu>0$ and $\mu=0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.