Convex Polytope Trees
- URL: http://arxiv.org/abs/2010.11266v1
- Date: Wed, 21 Oct 2020 19:38:57 GMT
- Title: Convex Polytope Trees
- Authors: Mohammadreza Armandpour, Mingyuan Zhou
- Abstract summary: Convex polytope trees (CPT) are proposed to expand the family of decision trees by an interpretable generalization of their decision boundary.
We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given.
- Score: 57.56078843831244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A decision tree is commonly restricted to use a single hyperplane to split
the covariate space at each of its internal nodes. It often requires a large
number of nodes to achieve high accuracy, hurting its interpretability. In this
paper, we propose convex polytope trees (CPT) to expand the family of decision
trees by an interpretable generalization of their decision boundary. The
splitting function at each node of CPT is based on the logical disjunction of a
community of differently weighted probabilistic linear decision-makers, which
also geometrically corresponds to a convex polytope in the covariate space. We
use a nonparametric Bayesian prior at each node to infer the community's size,
encouraging simpler decision boundaries by shrinking the number of polytope
facets. We develop a greedy method to efficiently construct CPT and scalable
end-to-end training algorithms for the tree parameters when the tree structure
is given. We empirically demonstrate the efficiency of CPT over existing
state-of-the-art decision trees in several real-world classification and
regression tasks from diverse domains.
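The splitting rule described in the abstract, a probabilistic disjunction of linear decision-makers whose joint "off" region is a convex polytope, can be sketched as follows. This is an illustrative sketch only: the sigmoid/noisy-OR composition and the names `polytope_split`, `W`, `b` are assumptions for exposition, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def polytope_split(x, W, b):
    """Probabilistic disjunction (noisy-OR) of K linear decision-makers.

    Each row of W (with offset b) defines one facet; the region where
    every decision-maker is off, {x : Wx + b <= 0}, is an intersection
    of half-spaces, i.e. a convex polytope in the covariate space.
    Returns P(at least one decision-maker fires).
    (Illustrative assumption: the paper's exact formulation may differ.)
    """
    p_fire = sigmoid(W @ x + b)            # per-facet firing probabilities
    return 1.0 - np.prod(1.0 - p_fire)     # probabilistic OR

# Example: a 2-facet polytope split in 2-D
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([-1.0, -1.0])
inside = polytope_split(np.array([-2.0, -2.0]), W, b)   # deep inside the polytope
outside = polytope_split(np.array([5.0, 5.0]), W, b)    # well outside it
```

Note how shrinking the number of rows of `W` (the polytope facets), as the paper's nonparametric Bayesian prior encourages, directly simplifies the decision boundary.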
Related papers
- Divide, Conquer, Combine Bayesian Decision Tree Sampling [1.1879716317856945]
Decision trees are commonly used predictive models due to their flexibility and interpretability.
This paper is directed at quantifying the uncertainty of decision tree predictions by employing a Bayesian inference approach.
arXiv Detail & Related papers (2024-03-26T23:14:15Z)
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods [10.289846887751079]
We introduce Hierarchical Shrinkage (HS), a post-hoc algorithm that does not modify the tree structure.
HS substantially increases the predictive performance of decision trees, even when used in conjunction with other regularization techniques.
All code and models are released in a full-fledged package available on GitHub.
arXiv Detail & Related papers (2022-02-02T02:43:23Z)
- Spectral Top-Down Recovery of Latent Tree Models [13.681975313065477]
Spectral Top-Down Recovery (STDR) is a divide-and-conquer approach for inference of large latent tree models.
STDR's partitioning step is non-random. Instead, it is based on the Fiedler vector of a suitable Laplacian matrix related to the observed nodes.
We prove that STDR is statistically consistent, and bound the number of samples required to accurately recover the tree with high probability.
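The Fiedler-vector partitioning mentioned above can be sketched in a few lines. This is a generic spectral-partitioning illustration, assuming a plain adjacency matrix; STDR itself uses a Laplacian tailored to the observed nodes, so treat the names and the matrix choice here as assumptions.

```python
import numpy as np

def fiedler_partition(A):
    """Split graph nodes by the sign of the Fiedler vector.

    A: symmetric adjacency matrix. The Fiedler vector is the eigenvector
    of the graph Laplacian L = D - A for its second-smallest eigenvalue;
    thresholding it at zero yields an approximately balanced 2-way cut.
    (Illustrative sketch; STDR's Laplacian is built from similarities of
    the observed nodes, not plain adjacency.)
    """
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                # second-smallest eigenvalue
    return fiedler >= 0                    # boolean group assignment

# Two triangles joined by a single bridge edge: the cheapest cut is the bridge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
groups = fiedler_partition(A)
```

On this toy graph the sign pattern of the Fiedler vector separates the two triangles, which is the non-random partitioning behavior the summary refers to.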
arXiv Detail & Related papers (2021-02-26T02:47:42Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions (soft routing) rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests achieve performance better than or comparable to [1], [3].
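The contrast between hard and soft routing can be made concrete with a minimal depth-1 example. The sigmoid gate and the function names below are illustrative assumptions, not the paper's exact parameterization; the point is that soft routing makes the tree's output a differentiable mixture of leaf values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_route(x, w, b):
    """Probabilistic routing at one internal node.

    A hard node sends x entirely left or right by the sign of w.x + b;
    a soft node instead splits the probability mass, so every leaf
    receives a share and the tree is differentiable end to end.
    (Illustrative sketch, not the paper's exact formulation.)
    """
    p_left = sigmoid(w @ x + b)
    return p_left, 1.0 - p_left

def soft_tree_predict(x, node, leaves):
    """Depth-1 soft tree: mix the two leaf values by routing probability."""
    w, b = node
    p_left, p_right = soft_route(x, w, b)
    return p_left * leaves[0] + p_right * leaves[1]

x = np.array([2.0, -1.0])
pred = soft_tree_predict(x, (np.array([1.0, 0.0]), 0.0), [1.0, -1.0])
```

With leaf values +1 and -1 the prediction is a smooth function of the input, whereas a hard split would jump discontinuously between the two leaves.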
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- Rethinking Learnable Tree Filter for Generic Feature Transform [71.77463476808585]
Learnable Tree Filter presents a remarkable approach to model structure-preserving relations for semantic segmentation.
To relax the geometric constraint, we analyze the filter by reformulating it as a Markov Random Field and introduce a learnable unary term.
For semantic segmentation, we achieve leading performance (82.1% mIoU) on the Cityscapes benchmark without bells-and-whistles.
arXiv Detail & Related papers (2020-12-07T07:16:47Z)
- A Flexible Pipeline for the Optimization of CSG Trees [3.622365857213782]
CSG trees are an intuitive, yet powerful technique for the representation of geometry using a combination of Boolean set-operations and geometric primitives.
We present a systematic comparison of newly developed and existing tree optimization methods and propose a flexible processing pipeline with a focus on tree editability.
arXiv Detail & Related papers (2020-08-09T06:45:10Z)
- The Max-Cut Decision Tree: Improving on the Accuracy and Running Time of Decision Trees [0.0]
The Max-Cut decision tree introduces novel modifications to a standard baseline model of classification decision tree construction, namely CART with the Gini criterion.
Our experiments show that this node-based localized PCA can dramatically improve classification, while also significantly decreasing computational time compared to the baseline decision tree.
Results are most significant on data sets with higher dimensionality or more classes; on the example data set CIFAR-100, the method achieves a 49% improvement in accuracy while reducing CPU time by 94%.
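The node-based localized PCA mentioned above can be sketched as follows: at each node, compute the principal direction from only the samples reaching that node, and use projections onto it as the split variable. The function names and the exact split procedure here are illustrative assumptions based on the summary, not the paper's precise algorithm.

```python
import numpy as np

def localized_pca_direction(X):
    """First principal direction of the samples reaching one tree node.

    Instead of axis-aligned candidate splits, the node splits along the
    direction of largest local variance, so each node adapts to its own
    region of the covariate space. (Illustrative sketch of the idea.)
    """
    Xc = X - X.mean(axis=0)                       # center locally, per node
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]                                  # unit vector, largest variance

def project_for_split(X, direction):
    """1-D projections; thresholding these defines the node's split."""
    return (X - X.mean(axis=0)) @ direction

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) * np.array([5.0, 0.5])   # variance mostly on axis 0
d = localized_pca_direction(X)
```

Because the direction is recomputed per node rather than once globally, deeper nodes can align with increasingly local structure, which is consistent with the reported gains on high-dimensional, many-class data.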
arXiv Detail & Related papers (2020-06-25T00:47:21Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- Polynomial-Time Exact MAP Inference on Discrete Models with Global Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning that ensures a run-time for solving our target problem independent of the form of the corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.