A Mathematical Programming Approach to Optimal Classification Forests
- URL: http://arxiv.org/abs/2211.10502v2
- Date: Mon, 24 Apr 2023 00:49:16 GMT
- Title: A Mathematical Programming Approach to Optimal Classification Forests
- Authors: Víctor Blanco, Alberto Japón, Justo Puerto, Peter Zhang
- Abstract summary: We propose a novel mathematical optimization-based methodology in which a given number of trees are simultaneously constructed.
The classification rule is derived by assigning to each observation its most frequently predicted class among the trees in the forest.
We show that our proposed method has equal or superior performance compared with state-of-the-art tree-based classification methods.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we introduce Optimal Classification Forests, a new family of
classifiers that takes advantage of an optimal ensemble of decision trees to
derive accurate and interpretable classifiers. We propose a novel mathematical
optimization-based methodology in which a given number of trees are
simultaneously constructed, each of them providing a predicted class for the
observations in the feature space. The classification rule is derived by
assigning to each observation its most frequently predicted class among the
trees in the forest. We provide a mixed integer linear programming formulation
for the problem. We report the results of our computational experiments, from
which we conclude that our proposed method has equal or superior performance
compared with state-of-the-art tree-based classification methods. More
importantly, it achieves high prediction accuracy with, for example, orders of
magnitude fewer trees than random forests. We also present three real-world
case studies showing that our methodology offers practical advantages in terms of interpretability.
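To make the voting rule concrete, here is a minimal sketch in Python. It is illustrative only: greedy scikit-learn trees stand in for the jointly optimized ones (the paper's MILP construction is not reproduced), and the forest size T and tree depth are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
T = 3  # a handful of trees rather than the hundreds typical of random forests

# Train T shallow greedy trees on bootstrap samples -- a stand-in for the
# simultaneous MILP construction described in the abstract.
rng = np.random.default_rng(0)
trees = []
for _ in range(T):
    idx = rng.integers(0, len(X), size=len(X))
    trees.append(DecisionTreeClassifier(max_depth=2).fit(X[idx], y[idx]))

# Classification rule from the abstract: assign each observation its most
# frequently predicted class among the trees in the forest.
votes = np.stack([t.predict(X) for t in trees])            # shape (T, n)
y_hat = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
print("training accuracy of the 3-tree majority vote:", (y_hat == y).mean())
```

Because T is tiny, each tree in the vote can still be inspected individually, which is the interpretability point the abstract emphasizes.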
Related papers
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers [68.76846801719095]
We argue that the current high-level dichotomy into bias- and variance-reduction prevalent in statistics is insufficient to understand tree ensembles.
We show that forests can improve upon trees by three distinct mechanisms that are usually implicitly entangled.
arXiv Detail & Related papers (2024-02-02T15:36:43Z)
- Unboxing Tree Ensembles for interpretability: a hierarchical visualization tool and a multivariate optimal re-built tree [0.34530027457862006]
We develop an interpretable representation of a tree-ensemble model that can provide valuable insights into its behavior.
The proposed model is effective in yielding a shallow interpretable tree approximating the tree-ensemble decision function.
arXiv Detail & Related papers (2023-02-15T10:43:31Z)
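One way to read the claim in the entry above: a single shallow tree is fit to mimic the ensemble. The sketch below uses a plain CART as a stand-in for the paper's multivariate optimal re-built tree; dataset, depth, and forest size are arbitrary choices for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit one shallow tree to the *forest's* predictions rather than to y,
# so the tree approximates the ensemble's decision function.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, forest.predict(X))

fidelity = (surrogate.predict(X) == forest.predict(X)).mean()
print(f"surrogate agrees with the forest on {fidelity:.1%} of the samples")
```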
- Explaining random forest prediction through diverse rulesets [0.0]
Local Tree eXtractor (LTreeX) is able to explain the forest prediction for a given test instance with a few diverse rules.
We show that our proposed approach substantially outperforms other explainable methods in terms of predictive performance.
arXiv Detail & Related papers (2022-03-29T12:54:57Z)
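The rule-based explanation in the entry above builds on a simple primitive: turning the root-to-leaf path an instance follows into a conjunction of conditions. Below is a minimal sketch of that primitive using a greedy scikit-learn tree; LTreeX's selection of a few diverse rules is not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def path_rule(clf, x):
    """Return the conjunction of tests one instance satisfies, root to leaf."""
    tree = clf.tree_
    nodes = clf.decision_path(x.reshape(1, -1)).indices   # nodes on the path
    conditions = []
    for node in nodes:
        if tree.children_left[node] == -1:                # leaf: path ends
            break
        f, t = tree.feature[node], tree.threshold[node]
        op = "<=" if x[f] <= t else ">"
        conditions.append(f"feature[{f}] {op} {t:.2f}")
    return " AND ".join(conditions)

print(path_rule(clf, X[0]))  # e.g. "feature[3] <= 0.80"
```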
- Optimal randomized classification trees [0.0]
Classification and Regression Trees (CARTs) are off-the-shelf techniques in modern Statistics and Machine Learning.
CARTs are built by means of a greedy procedure, sequentially deciding the splitting predictor variable(s) and the associated threshold.
This greedy approach trains trees very quickly, but, by its nature, the resulting classification accuracy may not be competitive with other state-of-the-art procedures.
arXiv Detail & Related papers (2021-10-19T11:41:12Z)
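For contrast with the optimal approaches elsewhere in this list, here is a minimal sketch of the greedy split search described in the entry above: scan every feature/threshold pair and keep the one minimizing weighted Gini impurity. Function names are illustrative; real CART adds stopping rules, pruning, and class weighting.

```python
import numpy as np

def gini(labels: np.ndarray) -> float:
    """Gini impurity of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X: np.ndarray, y: np.ndarray):
    """Return (feature, threshold) of the greedy impurity-minimizing split."""
    best = (None, None, np.inf)
    n = len(y)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:      # candidate thresholds
            left = X[:, j] <= t
            score = (left.sum() * gini(y[left]) +
                     (~left).sum() * gini(y[~left])) / n
            if score < best[2]:
                best = (j, t, score)
    return best[0], best[1]
```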
- Making CNNs Interpretable by Building Dynamic Sequential Decision Forests with Top-down Hierarchy Learning [62.82046926149371]
We propose a generic model transfer scheme to make Convolutional Neural Networks (CNNs) interpretable.
We achieve this by building a differentiable decision forest on top of CNNs.
We name the transferred model deep Dynamic Sequential Decision Forest (dDSDF).
arXiv Detail & Related papers (2021-06-05T07:41:18Z)
- Intersection Regularization for Extracting Semantic Attributes [72.53481390411173]
We consider the problem of supervised classification, such that the features that the network extracts match an unseen set of semantic attributes.
For example, when learning to classify images of birds into species, we would like to observe the emergence of features that zoologists use to classify birds.
We propose training a neural network with discrete top-level activations, which is followed by a multi-layered perceptron (MLP) and a parallel decision tree.
arXiv Detail & Related papers (2021-03-22T14:32:44Z)
- Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z)
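The core idea in the entry above can be sketched as a split criterion: score a candidate split by the downstream decision cost it induces, not by predictive accuracy. The cost-matrix interface below is a hypothetical simplification, not the paper's estimators.

```python
import numpy as np

def leaf_cost(costs: np.ndarray) -> float:
    """costs: (n_samples, n_decisions) realized costs; the leaf commits to
    one decision for all its samples, so return the best achievable mean cost."""
    return costs.mean(axis=0).min()

def decision_split_score(x_j: np.ndarray, t: float, costs: np.ndarray) -> float:
    """Weighted downstream cost of splitting feature column x_j at threshold t."""
    left = x_j <= t
    if left.all() or not left.any():
        return np.inf                          # degenerate split
    n = len(x_j)
    return (left.sum() * leaf_cost(costs[left])
            + (~left).sum() * leaf_cost(costs[~left])) / n

# Toy usage: two decisions, costs depend on a regime correlated with x.
x = np.array([0.1, 0.2, 0.8, 0.9])
costs = np.array([[0., 1.], [0., 1.], [1., 0.], [1., 0.]])
print(decision_split_score(x, 0.5, costs))     # 0.0: the split separates regimes
```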
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
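A naive illustration of the dynamic-programming recursion behind the entry above, assuming binary features: the best depth-d tree for a dataset decomposes into the best depth-(d-1) trees for the two subsets a split induces. MurTree's caching, bounding, and specialized depth-2 solver are all omitted here.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def optimal_misclassifications(rows, depth):
    """rows: tuple of (binary-feature tuple, label) pairs; return the fewest
    misclassifications any tree of the given depth can make on them."""
    labels = [label for _, label in rows]
    leaf_err = len(labels) - max(labels.count(c) for c in set(labels))
    if depth == 0 or leaf_err == 0:
        return leaf_err
    best = leaf_err
    for f in range(len(rows[0][0])):
        left = tuple(r for r in rows if r[0][f] == 0)
        right = tuple(r for r in rows if r[0][f] == 1)
        if not left or not right:
            continue
        best = min(best, optimal_misclassifications(left, depth - 1)
                         + optimal_misclassifications(right, depth - 1))
    return best

data = (((0, 1), 0), ((0, 0), 0), ((1, 1), 1), ((1, 0), 1))
print(optimal_misclassifications(data, depth=1))  # 0: feature 0 separates classes
```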
- Sparsity in Optimal Randomized Classification Trees [3.441021278275805]
We propose a continuous optimization approach to build sparse optimal classification trees, based on oblique cuts.
Both types of sparsity, namely local and global, are modeled by means of regularizations with polyhedral norms.
Unlike greedy approaches, ours can easily trade a small amount of classification accuracy for a gain in global sparsity.
arXiv Detail & Related papers (2020-02-21T09:09:59Z)
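To unpack the terminology in the entry above: an oblique cut branches on a linear combination a·x <= b rather than on a single feature; local sparsity means few nonzero coefficients in each cut, and global sparsity means few features used anywhere in the tree. The sketch below pairs an L1 penalty with a per-feature L-infinity penalty (both polyhedral norms); the exact weighting is an assumption, not the paper's formulation.

```python
import numpy as np

def oblique_branch(x, a, b):
    """Route an observation through one oblique cut a.x <= b."""
    return "left" if x @ a <= b else "right"

def sparsity_penalty(A, lam_local, lam_global):
    """A: (n_cuts, n_features) coefficient matrix, one row per cut.
    The entrywise L1 term pushes individual coefficients to zero (local
    sparsity); summing the per-feature L-infinity norms pushes whole
    columns -- i.e., features -- out of the tree (global sparsity)."""
    local = np.abs(A).sum()
    global_ = np.abs(A).max(axis=0).sum()
    return lam_local * local + lam_global * global_

# Toy usage: two cuts that together use only features 0 and 2.
A = np.array([[0.9, 0.0, 0.1],
              [0.5, 0.0, 0.0]])
print(oblique_branch(np.array([1.0, 5.0, 2.0]), A[0], b=1.2))  # "left"
print(sparsity_penalty(A, lam_local=0.1, lam_global=1.0))
```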