Decision Tree Embedding by Leaf-Means
- URL: http://arxiv.org/abs/2512.01819v1
- Date: Mon, 01 Dec 2025 15:57:33 GMT
- Title: Decision Tree Embedding by Leaf-Means
- Authors: Cencheng Shen, Yuexiao Dong, Carey E. Priebe
- Abstract summary: Decision Tree Embedding (DTE) is a fast and effective method that leverages the leaf partitions of a trained classification tree to construct an interpretable feature representation. By using the sample means within each leaf region as anchor points, DTE maps inputs into an embedding space defined by the tree's partition structure. We establish several population-level theoretical properties of DTE, including its preservation of conditional density under mild conditions.
- Score: 11.318593165494724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision trees and random forests remain highly competitive for classification on medium-sized, standard datasets due to their robustness, minimal preprocessing requirements, and interpretability. However, a single tree suffers from high estimation variance, while large ensembles reduce this variance at the cost of substantial computational overhead and diminished interpretability. In this paper, we propose Decision Tree Embedding (DTE), a fast and effective method that leverages the leaf partitions of a trained classification tree to construct an interpretable feature representation. By using the sample means within each leaf region as anchor points, DTE maps inputs into an embedding space defined by the tree's partition structure, effectively circumventing the high variance inherent in decision-tree splitting rules. We further introduce an ensemble extension based on additional bootstrap trees, and pair the resulting embedding with linear discriminant analysis for classification. We establish several population-level theoretical properties of DTE, including its preservation of conditional density under mild conditions and a characterization of the resulting classification error. Empirical studies on synthetic and real datasets demonstrate that DTE strikes a strong balance between accuracy and computational efficiency, outperforming or matching random forests and shallow neural networks while requiring only a fraction of their training time in most cases. Overall, the proposed DTE method can be viewed either as a scalable decision tree classifier that improves upon standard split rules, or as a neural network model whose weights are learned from tree-derived anchor points, achieving an intriguing integration of both paradigms.
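The pipeline described in the abstract — fit one tree, take the per-leaf sample means as anchor points, embed inputs relative to those anchors, then classify with linear discriminant analysis — can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the choice of a Euclidean-distance map to the anchors, and all hyperparameters, are assumptions for the sake of a runnable example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a single classification tree and read off its leaf partition.
tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0).fit(X_tr, y_tr)
leaf_ids = tree.apply(X_tr)

# Anchor points: the sample mean of the training points in each leaf region.
anchors = np.vstack([X_tr[leaf_ids == leaf].mean(axis=0)
                     for leaf in np.unique(leaf_ids)])

def dte_embed(X):
    # Map each input to its vector of distances to the leaf-mean anchors
    # (one embedding dimension per leaf of the tree).
    return np.linalg.norm(X[:, None, :] - anchors[None, :, :], axis=2)

# Pair the embedding with LDA for classification, as the paper does.
clf = LinearDiscriminantAnalysis().fit(dte_embed(X_tr), y_tr)
acc = clf.score(dte_embed(X_te), y_te)
print(f"test accuracy: {acc:.3f}")
```

Note that the embedding depends on the tree only through its partition and the leaf means, so the downstream classifier is insulated from the variance of the individual split thresholds; the ensemble extension in the paper would simply concatenate embeddings from additional bootstrap trees.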
Related papers
- Learning Order Forest for Qualitative-Attribute Data Clustering [52.612779710298526]
This paper discovers a tree-like distance structure to flexibly represent the local order relationship among intra-attribute qualitative values. A joint learning mechanism is proposed to iteratively obtain more appropriate tree structures and clusters. Experiments demonstrate that the joint learning adapts the forest to the clustering task to yield accurate results.
arXiv Detail & Related papers (2026-03-03T07:49:50Z) - Partition Trees: Conditional Density Estimation over General Outcome Spaces [46.1988967916659]
We propose Partition Trees, a tree-based framework for conditional density estimation over general outcome spaces. Our approach models conditional distributions as piecewise-constant densities on data-adaptive partitions and learns trees by directly minimizing conditional negative log-likelihood.
arXiv Detail & Related papers (2026-02-03T22:12:30Z) - ZTree: A Subgroup Identification Based Decision Tree Learning Framework [3.119681354260829]
We propose ZTree, a novel decision tree learning framework. It replaces CART's traditional purity-based splitting with statistically principled subgroup identification. ZTree consistently delivers strong performance, especially in low-data regimes.
arXiv Detail & Related papers (2025-09-16T05:25:16Z) - TreePO: Bridging the Gap of Policy Optimization and Efficacy and Inference Efficiency with Heuristic Tree-based Modeling [65.46347858249295]
TreePO is a self-guided rollout algorithm that views sequence generation as a tree-structured searching process. TreePO essentially reduces the per-update compute burden while preserving or enhancing exploration diversity.
arXiv Detail & Related papers (2025-08-24T16:52:37Z) - Hierarchical Quantized Diffusion Based Tree Generation Method for Hierarchical Representation and Lineage Analysis [49.00783841494125]
HDTree captures tree relationships within a hierarchical latent space using a unified hierarchical codebook and quantized diffusion processes. HDTree's effectiveness is demonstrated through comparisons on both general-purpose and single-cell datasets. These contributions provide a new tool for hierarchical lineage analysis, enabling more accurate and efficient modeling of cellular differentiation paths.
arXiv Detail & Related papers (2025-06-29T15:19:13Z) - Learning Decision Trees as Amortized Structure Inference [59.65621207449269]
We propose a hybrid amortized structure inference approach to learn predictive decision tree ensembles given data. We show that our approach, DT-GFN, outperforms state-of-the-art decision tree and deep learning methods on standard classification benchmarks.
arXiv Detail & Related papers (2025-03-10T07:05:07Z) - Explainable Clustering Beyond Worst-Case Guarantees [5.65604054654671]
We study the explainable clustering problem first posed by Moshkovitz, Dasgupta, Rashtchian, and Frost (ICML 2020). The goal of explainable clustering is to fit an axis-aligned decision tree with $K$ leaves and minimal clustering cost, where every leaf is a cluster.
arXiv Detail & Related papers (2024-11-03T14:00:20Z) - Adapting tree-based multiple imputation methods for multi-level data? A simulation study [0.0]
Tree-based imputation methods implicitly assume independence between observations. Alternative tree-based approaches have shown promise for individual-level data, but remain largely unexplored for hierarchical contexts.
arXiv Detail & Related papers (2024-01-25T13:12:50Z) - Regularized impurity reduction: Accurate decision trees with complexity guarantees [20.170305081348328]
We propose a tree-induction algorithm that gives a logarithmic approximation guarantee on the tree complexity.
The enhanced algorithms strike an excellent balance between predictive accuracy and tree complexity.
arXiv Detail & Related papers (2022-08-23T13:15:19Z) - Making CNNs Interpretable by Building Dynamic Sequential Decision Forests with Top-down Hierarchy Learning [62.82046926149371]
We propose a generic model transfer scheme to make Convolutional Neural Networks (CNNs) interpretable.
We achieve this by building a differentiable decision forest on top of CNNs.
We name the transferred model deep Dynamic Sequential Decision Forest (dDSDF).
arXiv Detail & Related papers (2021-06-05T07:41:18Z) - Convex Polytope Trees [57.56078843831244]
Convex polytope trees (CPT) are proposed to expand the family of decision trees via an interpretable generalization of their decision boundary.
We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given.
arXiv Detail & Related papers (2020-10-21T19:38:57Z) - Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning [66.01622034708319]
We propose a knowledge-distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criteria and stopping condition of standard decision trees to allow training with soft labels.
We then train the ReDT on soft labels distilled from a well-trained teacher model through a novel jackknife-based method.
arXiv Detail & Related papers (2020-08-21T10:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.