BooleanOCT: Optimal Classification Trees based on multivariate Boolean
Rules
- URL: http://arxiv.org/abs/2401.16133v1
- Date: Mon, 29 Jan 2024 12:58:44 GMT
- Title: BooleanOCT: Optimal Classification Trees based on multivariate Boolean
Rules
- Authors: Jiancheng Tu, Wenqi Fan and Zhibin Wu
- Abstract summary: We introduce a new mixed-integer programming (MIP) formulation to derive the optimal classification tree.
Our methodology integrates both linear metrics (accuracy, balanced accuracy, and cost-sensitive cost) and nonlinear metrics such as the F1-score.
The proposed models demonstrate practical solvability on real-world datasets, effectively handling sizes in the tens of thousands.
- Score: 14.788278997556606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The global optimization of classification trees has demonstrated considerable
promise, notably in enhancing accuracy, optimizing size, and thereby improving
human comprehensibility. While existing optimal classification trees
substantially enhance accuracy over greedy-based tree models like CART, they
still fall short when compared to the more complex black-box models, such as
random forests. To bridge this gap, we introduce a new mixed-integer
programming (MIP) formulation, grounded in multivariate Boolean rules, to
derive the optimal classification tree. Our methodology integrates both
linear metrics, including accuracy, balanced accuracy, and cost-sensitive
cost, and nonlinear metrics such as the F1-score. The approach is implemented in
an open-source Python package named BooleanOCT. We comprehensively benchmark
these methods on 36 datasets from the UCI machine learning repository. The
proposed models demonstrate practical solvability on real-world datasets,
effectively handling sizes in the tens of thousands. Aiming to maximize
accuracy, this model achieves an average absolute improvement of 3.1% and
1.5% over random forests in small-scale and medium-sized datasets,
respectively. Experiments targeting various objectives, including balanced
accuracy, cost-sensitive cost, and F1-score, demonstrate the framework's wide
applicability and its superiority over contemporary state-of-the-art optimal
classification tree methods in small to medium-scale datasets.
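As a minimal illustration of the multivariate Boolean-rule idea (not the paper's actual MIP formulation, which would require an integer-programming solver, and not the BooleanOCT package API), the sketch below brute-forces the conjunction of binary features that maximizes training accuracy at a single split node; the dataset and feature indices are purely hypothetical.

```python
from itertools import combinations

def best_conjunction(X, y, max_terms=2):
    """Brute-force the Boolean conjunction of up to `max_terms` binary
    features that best separates two classes at a single tree node.
    A toy stand-in for the MIP-based split search, for illustration only."""
    n_features = len(X[0])
    best_rule, best_acc = (), 0.0
    for k in range(1, max_terms + 1):
        for rule in combinations(range(n_features), k):
            # Predict class 1 when every feature in the rule is set.
            preds = [int(all(row[j] for j in rule)) for row in X]
            acc = sum(p == t for p, t in zip(preds, y)) / len(y)
            if acc > best_acc:
                best_rule, best_acc = rule, acc
    return best_rule, best_acc

# Hypothetical binary dataset: class 1 iff features 0 AND 2 are both set.
X = [[1, 0, 1], [1, 1, 1], [0, 1, 1], [1, 0, 0], [0, 0, 0]]
y = [1, 1, 0, 0, 0]
print(best_conjunction(X, y))  # → ((0, 2), 1.0)
```

The MIP formulation replaces this exponential enumeration with binary decision variables per feature and per node, which is what makes tens of thousands of instances tractable with a commercial solver.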
Related papers
- Decoding-Time Language Model Alignment with Multiple Objectives [116.42095026960598]
Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives.
Here, we propose multi-objective decoding (MOD), a decoding-time algorithm that outputs the next token from a linear combination of predictions.
We show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method.
arXiv Detail & Related papers (2024-06-27T02:46:30Z) - Minimally Supervised Learning using Topological Projections in
Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data; a minimal number of available labeled data points are then assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
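The labeling step above can be sketched as nearest-prototype assignment (a simplification of BMU lookup in a trained SOM; the grid training itself is omitted and all names and coordinates here are illustrative):

```python
def propagate_labels(prototypes, proto_labels, points):
    """Assign each unlabeled point the label of its nearest prototype,
    mimicking BMU-based labeling in a trained SOM (training omitted)."""
    def nearest(p):
        dists = [sum((a - b) ** 2 for a, b in zip(p, q)) for q in prototypes]
        return dists.index(min(dists))
    return [proto_labels[nearest(p)] for p in points]

# Illustrative 2-D prototypes, each labeled from a handful of labeled samples.
prototypes = [(0.0, 0.0), (1.0, 1.0)]
proto_labels = ["a", "b"]
print(propagate_labels(prototypes, proto_labels, [(0.1, 0.2), (0.9, 0.8)]))
# → ['a', 'b']
```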
arXiv Detail & Related papers (2024-01-12T22:51:48Z) - An improved column-generation-based matheuristic for learning
classification trees [9.07661731728456]
Decision trees are highly interpretable models for solving classification problems in machine learning (ML).
Standard ML algorithms for training decision trees are fast but generate suboptimal trees in terms of accuracy.
Firat et al. (2020) proposed a column-generation-based approach for learning decision trees.
arXiv Detail & Related papers (2023-08-22T14:43:36Z) - Unboxing Tree Ensembles for interpretability: a hierarchical
visualization tool and a multivariate optimal re-built tree [0.34530027457862006]
We develop an interpretable representation of a tree-ensemble model that can provide valuable insights into its behavior.
The proposed model is effective in yielding a shallow interpretable tree that approximates the tree-ensemble decision function.
arXiv Detail & Related papers (2023-02-15T10:43:31Z) - bsnsing: A decision tree induction method based on recursive optimal
boolean rule composition [2.28438857884398]
This paper proposes a new mixed-integer programming (MIP) formulation to optimize split rule selection in the decision tree induction process.
It develops an efficient search solver that is able to solve practical instances faster than commercial solvers.
arXiv Detail & Related papers (2022-05-30T17:13:57Z) - On multivariate randomized classification trees: $l_0$-based sparsity,
VC~dimension and decomposition methods [0.9346127431927981]
We investigate the nonlinear continuous optimization formulation proposed in Blanquero et al.
We first consider alternative methods to sparsify such trees based on concave approximations of the $l_0$ norm.
We propose a general decomposition scheme and an efficient version of it. Experiments on larger datasets show that the proposed decomposition method is able to significantly reduce the training times without compromising the accuracy.
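The concave-surrogate idea can be illustrated with one common smooth approximation of the $l_0$ norm, $1 - e^{-\alpha |w|}$ per weight (the paper may use a different surrogate; the choice of $\alpha$ and the weight vector below are illustrative):

```python
import math

def l0_surrogate(weights, alpha=5.0):
    """Smooth concave approximation of the l0 norm: each term approaches 1
    for weights far from zero and equals 0 at zero, so the sum approximately
    counts the nonzero entries while remaining differentiable."""
    return sum(1.0 - math.exp(-alpha * abs(w)) for w in weights)

w = [0.0, 0.001, 2.0]
print(len([x for x in w if x != 0]))          # exact l0 norm: 2
print(round(l0_surrogate(w, alpha=5.0), 3))   # surrogate barely counts 0.001
```

Because the surrogate is continuous, it can be dropped into a nonlinear-programming formulation of the tree, unlike the discontinuous exact $l_0$ count.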
arXiv Detail & Related papers (2021-12-09T22:49:08Z) - Evaluating State-of-the-Art Classification Models Against Bayes
Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
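For univariate Gaussians the Cauchy-Schwarz divergence has a closed form built from standard Gaussian product integrals; the sketch below is a minimal illustration of that analytic tractability (derived from textbook Gaussian identities, not taken from the paper's GMM formulation):

```python
import math

def gauss_overlap(m1, s1, m2, s2):
    """Closed-form integral of the product of two 1-D Gaussian densities."""
    var = s1 ** 2 + s2 ** 2
    return math.exp(-(m1 - m2) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def cs_divergence(m1, s1, m2, s2):
    """Cauchy-Schwarz divergence -log( <p,q> / sqrt(<p,p><q,q>) );
    it is symmetric and equals 0 iff the two densities coincide."""
    z12 = gauss_overlap(m1, s1, m2, s2)
    z11 = gauss_overlap(m1, s1, m1, s1)
    z22 = gauss_overlap(m2, s2, m2, s2)
    return -math.log(z12 / math.sqrt(z11 * z22))

print(round(cs_divergence(0.0, 1.0, 0.0, 1.0), 6))  # identical densities → 0.0
print(cs_divergence(0.0, 1.0, 3.0, 1.0) > 0)        # distinct densities → True
```

For Gaussian mixtures, each pairwise term has the same closed form, which is why the objective can be computed analytically for GMM priors.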
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Optimal Decision Trees for Nonlinear Metrics [42.18286681448184]
We present a novel algorithm for producing optimal trees for nonlinear metrics.
To the best of our knowledge, this is the first method to compute provably optimal decision trees for nonlinear metrics.
Our approach leads to a trade-off when compared to optimising linear metrics.
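The nonlinearity that separates F1 from linear objectives like accuracy is visible directly in the confusion counts (a small illustrative computation; the specific counts are made up):

```python
def f1_score(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN): a ratio of counts, hence nonlinear
    in the per-instance decisions, unlike accuracy, which is a plain sum."""
    return 2 * tp / (2 * tp + fp + fn)

# Pooling illustrates the nonlinearity: the F1 of combined counts is not
# the mean of the per-batch F1 scores, so F1 cannot be optimized by
# summing a fixed per-instance cost.
a = f1_score(tp=1, fp=0, fn=9)          # 2/11  ≈ 0.182
b = f1_score(tp=9, fp=0, fn=1)          # 18/19 ≈ 0.947
pooled = f1_score(tp=10, fp=0, fn=10)   # 20/30 ≈ 0.667
print(round((a + b) / 2, 3), round(pooled, 3))  # → 0.565 0.667
```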
arXiv Detail & Related papers (2020-09-15T08:30:56Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z) - ENTMOOT: A Framework for Optimization over Ensemble Tree Models [57.98561336670884]
ENTMOOT is a framework for integrating tree models into larger optimization problems.
We show how ENTMOOT allows a simple integration of tree models into decision-making and black-box optimization.
arXiv Detail & Related papers (2020-03-10T14:34:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.