Approximating splits for decision trees quickly in sparse data streams
- URL: http://arxiv.org/abs/2601.12525v1
- Date: Sun, 18 Jan 2026 18:19:28 GMT
- Title: Approximating splits for decision trees quickly in sparse data streams
- Authors: Nikolaj Tatti
- Abstract summary: We focus on how to speed up the search for the optimal split when dealing with sparse binary features and a binary class. In our experiments we find almost-optimal splits efficiently, faster than the baseline, outperforming the theoretical approximation guarantees.
- Score: 6.184770559759073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision trees are one of the most popular classifiers in the machine learning literature. While the most common decision tree learning algorithms treat data as a batch, numerous algorithms have been proposed to construct decision trees from a data stream. A standard training strategy involves augmenting the current tree by changing a leaf node into a split. Here we typically maintain counters in each leaf which allow us to determine the optimal split, and whether the split should be done. In this paper we focus on how to speed up the search for the optimal split when dealing with sparse binary features and a binary class. We focus on finding splits that have the approximately optimal information gain or Gini index. In both cases finding the optimal split can be done in $O(d)$ time, where $d$ is the number of features. We propose an algorithm that yields $(1 + α)$ approximation when using conditional entropy in amortized $O(α^{-1}(1 + m\log d) \log \log n)$ time, where $m$ is the number of 1s in a data point, and $n$ is the number of data points. Similarly, for Gini index, we achieve $(1 + α)$ approximation in amortized $O(α^{-1} + m \log d)$ time. Our approach is beneficial for sparse data where $m \ll d$. In our experiments we find almost-optimal splits efficiently, faster than the baseline, outperforming the theoretical approximation guarantees.
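The $O(d)$ baseline the abstract refers to can be illustrated with a small sketch (hypothetical code, not the paper's implementation): each leaf keeps, for every binary feature, counters of how many points have the feature set and how many of those are positive, and the split search scans all $d$ candidate features once, scoring each by weighted Gini impurity.

```python
def best_gini_split(counts, n_pos, n):
    """Exhaustive O(d) baseline: pick the binary feature whose split
    minimizes the weighted Gini impurity, using per-leaf counters.

    counts[j] = (n1, p1): number of points in the leaf with feature
    j == 1, and how many of those are positive. n_pos and n are the
    leaf's total positive count and total point count.
    """
    def gini(p, m):  # Gini impurity of a group of m points, p of them positive
        if m == 0:
            return 0.0
        q = p / m
        return 2.0 * q * (1.0 - q)

    best_j, best_cost = None, float("inf")
    for j, (n1, p1) in enumerate(counts):       # one pass over all d features
        n0, p0 = n - n1, n_pos - p1             # the feature == 0 side
        cost = (n1 * gini(p1, n1) + n0 * gini(p0, n0)) / n
        if cost < best_cost:
            best_j, best_cost = j, cost
    return best_j, best_cost
```

For sparse data with $m \ll d$ ones per point, the paper's contribution is avoiding this full scan; the sketch above is only the exact baseline it is compared against.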
Related papers
- Log-Time K-Means Clustering for 1D Data: Novel Approaches with Proof and Implementation [0.0]
This thesis bridges theory and practice for 1D $k$-means clustering, delivering efficient and sound algorithms. Benchmarks demonstrate over a 4500x speedup compared to scikit-learn for large datasets.
arXiv Detail & Related papers (2024-12-19T09:03:39Z) - Finite Sample Complexity Analysis of Binary Segmentation [2.474754293747645]
We describe new methods for analyzing the time and space complexity of binary segmentation for a given finite $N$, $K$, and minimum segment length parameter.
We provide an empirical analysis of real data which suggests that binary segmentation is often close to optimal speed in practice.
arXiv Detail & Related papers (2024-10-11T09:26:29Z) - Differentially Private Kernel Density Estimation [11.526850085349155]
We introduce a refined differentially private (DP) data structure for kernel density estimation (KDE). We study the mathematical problem: given a similarity function $f$ and a private dataset $X \subset \mathbb{R}^d$, our goal is to preprocess $X$ so that for any query $y \in \mathbb{R}^d$, we approximate $\sum_{x \in X} f(x, y)$ in a differentially private fashion.
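The simplest baseline for this problem (not the paper's refined data structure) answers a single query with the Laplace mechanism: for a kernel bounded in $[0, 1]$, adding or removing one point changes the sum by at most 1, so Laplace noise of scale $1/\epsilon$ gives $\epsilon$-DP per query. A hedged sketch, with all names hypothetical:

```python
import math
import random

def dp_kde_query(X, y, epsilon, bandwidth=1.0):
    """Answer one KDE query sum_x f(x, y) with epsilon-DP via the
    Laplace mechanism. The Gaussian kernel is bounded in [0, 1], so
    the sensitivity of the sum to one point is at most 1.
    """
    def kernel(x, y):
        d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
        return math.exp(-d2 / (2 * bandwidth ** 2))

    true_sum = sum(kernel(x, y) for x in X)     # O(n) per query
    # Laplace(scale = 1/epsilon) as a difference of two exponentials
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_sum + noise
```

This spends privacy budget per query and costs $O(n)$ time; the point of a DP KDE *data structure* is to preprocess $X$ once so that many queries can be answered quickly under a single privacy guarantee.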
arXiv Detail & Related papers (2024-09-03T08:01:19Z) - A Scalable Algorithm for Individually Fair K-means Clustering [77.93955971520549]
We present a scalable algorithm for the individually fair ($p$, $k$)-clustering problem introduced by Jung et al. and Mahabadi et al.
A clustering is then called individually fair if it has centers within distance $\delta(x)$ of $x$ for each $x \in P$.
We show empirically that not only is our algorithm much faster than prior work, but it also produces lower-cost solutions.
arXiv Detail & Related papers (2024-02-09T19:01:48Z) - A Formal Perspective on Byte-Pair Encoding [100.75374173565548]
Byte-Pair Encoding (BPE) is a popular algorithm used for tokenizing data in NLP, despite being devised initially as a compression method.
We provide a faster implementation of BPE which improves the runtime complexity from $\mathcal{O}(NM)$ to $\mathcal{O}(N \log M)$, where $N$ is the sequence length and $M$ is the merge count.
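The $\mathcal{O}(NM)$ baseline being improved on is easy to see in a naive sketch (illustrative code, not the paper's implementation): each of the $M$ merges recounts all adjacent pairs and rewrites the whole length-$N$ sequence.

```python
from collections import Counter

def bpe_train(seq, num_merges):
    """Naive Byte-Pair Encoding: each of the M merges scans the whole
    length-N sequence twice (count pairs, then rewrite), giving the
    O(N * M) total runtime."""
    merges = []
    seq = list(seq)
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))      # count adjacent pairs: O(N)
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]     # most frequent pair
        merges.append((a, b))
        out, i = [], 0
        while i < len(seq):                     # replace every (a, b): O(N)
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, merges
```

The faster implementation referenced above avoids the full rescan per merge, e.g. by maintaining pair counts incrementally in a priority structure.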
arXiv Detail & Related papers (2023-06-29T10:29:23Z) - Fully-Dynamic Decision Trees [3.0058005235097114]
We develop the first fully dynamic algorithm that maintains a decision tree over an arbitrary sequence of insertions and deletions of labeled examples.
For real-valued features the algorithm has an amortized running time per insertion/deletion of $O\big(\frac{d \log^3 n}{\epsilon^2}\big)$.
We show that any algorithm with similar guarantees uses amortized running time $\Omega(d)$ and space $\tilde{\Omega}(nd)$.
arXiv Detail & Related papers (2022-12-01T18:58:19Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with the communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - Scalable Differentially Private Clustering via Hierarchically Separated Trees [82.69664595378869]
We show that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee.
Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical.
arXiv Detail & Related papers (2022-06-17T09:24:41Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than that of its closest $t$-piece degree-$d$ approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Systematically improving existing k-means initialization algorithms at
nearly no cost, by pairwise-nearest-neighbor smoothing [1.2570180539670577]
We present a meta-method for initializing the $k$-means clustering algorithm called PNN-smoothing.
It consists in splitting a given dataset into $J$ random subsets, clustering each of them individually, and merging the resulting clusterings with the pairwise-nearest-neighbor method.
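The merging step of PNN-smoothing can be sketched as follows (an illustrative implementation of generic pairwise-nearest-neighbor merging, not the paper's code): treat the centers produced by clustering the $J$ subsets as weighted points and repeatedly fuse the closest pair until $k$ centers remain, which then seed the final $k$-means run.

```python
import numpy as np

def pnn_merge(centers, weights, k):
    """Pairwise-nearest-neighbor merging: repeatedly fuse the two
    closest centers into their weighted mean until only k remain."""
    centers = [np.asarray(c, dtype=float) for c in centers]
    weights = list(weights)
    while len(centers) > k:
        best, best_d = None, np.inf
        for i in range(len(centers)):           # naive O(len^2) pair scan
            for j in range(i + 1, len(centers)):
                d = np.sum((centers[i] - centers[j]) ** 2)
                if d < best_d:
                    best, best_d = (i, j), d
        i, j = best
        w = weights[i] + weights[j]
        merged = (weights[i] * centers[i] + weights[j] * centers[j]) / w
        centers[j], weights[j] = merged, w      # keep the merged center
        del centers[i]                          # drop the absorbed one
        del weights[i]
    return np.stack(centers), weights
```

In the full meta-method, each subset would first be clustered by any base initializer plus $k$-means; the merge above only combines the resulting $J \cdot k$ centers into a single smoothed $k$-center initialization.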
arXiv Detail & Related papers (2022-02-08T15:56:30Z) - List-Decodable Mean Estimation in Nearly-PCA Time [50.79691056481693]
We study the fundamental task of list-decodable mean estimation in high dimensions.
Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.
A variant of our algorithm has runtime $\widetilde{O}(ndk)$ for all $k$, at the expense of an $O(\sqrt{\log k})$ factor in the recovery guarantee.
arXiv Detail & Related papers (2020-11-19T17:21:37Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.