Construction of Decision Trees and Acyclic Decision Graphs from Decision
Rule Systems
- URL: http://arxiv.org/abs/2305.01721v1
- Date: Tue, 2 May 2023 18:40:48 GMT
- Title: Construction of Decision Trees and Acyclic Decision Graphs from Decision
Rule Systems
- Authors: Kerven Durdymyradov and Mikhail Moshkov
- Abstract summary: We study the complexity of constructing decision trees and acyclic decision graphs representing decision trees from decision rule systems.
We discuss the possibility of not building the entire decision tree, but describing the computation path in this tree for the given input.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Decision trees and systems of decision rules are widely used as classifiers,
as a means for knowledge representation, and as algorithms. They are among the
most interpretable models for data analysis. The study of the relationships
between these two models can be seen as an important task of computer science.
Methods for transforming decision trees into systems of decision rules are
simple and well-known. In this paper, we consider the inverse transformation
problem, which is not trivial. We study the complexity of constructing decision
trees and acyclic decision graphs representing decision trees from decision
rule systems, and we discuss the possibility of not building the entire
decision tree, but describing the computation path in this tree for the given
input.
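For context on the forward direction that the abstract calls simple and well-known, here is a minimal sketch of converting a decision tree into a system of decision rules by emitting one rule per root-to-leaf path. The Node layout, the attribute names f1 and f2, and the tree_to_rules helper are illustrative assumptions, not constructions from the paper.

```python
# Illustrative sketch only (not from the paper): the well-known forward
# direction, converting a decision tree into a decision rule system by
# emitting one rule per root-to-leaf path.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple


@dataclass
class Node:
    # A leaf carries a decision; an internal node queries an attribute and
    # branches on its possible values.
    decision: Optional[str] = None
    attribute: Optional[str] = None
    children: Dict[object, "Node"] = field(default_factory=dict)


def tree_to_rules(node: Node, path: Tuple[Tuple[str, object], ...] = ()) -> List[str]:
    """Return one rule "cond_1 and ... and cond_k -> decision" per leaf."""
    if node.decision is not None:
        lhs = " and ".join(f"{a} = {v}" for a, v in path) or "true"
        return [f"{lhs} -> {node.decision}"]
    rules: List[str] = []
    for value, child in node.children.items():
        rules.extend(tree_to_rules(child, path + ((node.attribute, value),)))
    return rules


# Tiny tree over two binary attributes f1 and f2 (hypothetical names).
tree = Node(attribute="f1", children={
    0: Node(decision="class_0"),
    1: Node(attribute="f2", children={
        0: Node(decision="class_0"),
        1: Node(decision="class_1"),
    }),
})

for rule in tree_to_rules(tree):
    print(rule)  # e.g. "f1 = 1 and f2 = 1 -> class_1"
```

The inverse direction studied in the paper, recovering a decision tree or an acyclic decision graph from such a rule system, is the nontrivial part and is where the complexity questions arise.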
Related papers
- Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z) - Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z) - Greedy Algorithm for Inference of Decision Trees from Decision Rule
Systems [0.0]
Decision trees and decision rule systems play important roles as classifiers, knowledge representation tools, and algorithms.
In this paper, we consider the inverse transformation problem, which is not so simple.
Instead of constructing an entire decision tree, our study focuses on a greedy polynomial-time algorithm that simulates the operation of a decision tree on a given tuple of attribute values (an illustrative sketch of this idea follows the list below).
arXiv Detail & Related papers (2024-01-08T09:28:55Z) - TreeDQN: Learning to minimize Branch-and-Bound tree [78.52895577861327]
Branch-and-Bound is a convenient approach to solving optimization tasks in the form of Mixed Integer Linear Programs.
The efficiency of the solver depends on the branching heuristic used to select a variable for splitting.
We propose a reinforcement learning method that can efficiently learn the branching.
arXiv Detail & Related papers (2023-06-09T14:01:26Z) - SONG: Self-Organizing Neural Graphs [10.253870280561609]
Decision trees are easy to interpret since they are based on binary decisions; they can make decisions faster, and they provide a hierarchy of classes.
One of the well-known drawbacks of decision trees, as compared to decision graphs, is that decision trees cannot reuse the decision nodes.
In this paper, we provide a general paradigm based on Markov processes, which allows for efficient training of a special type of decision graphs, which we call Self-Organizing Neural Graphs (SONG).
arXiv Detail & Related papers (2021-07-28T07:53:53Z) - Decision Concept Lattice vs. Decision Trees and Random Forests [4.898744396854312]
We merge the ideas underlying decision trees, their ensembles, and Formal Concept Analysis (FCA) by proposing a new supervised machine learning model.
Specifically, we first propose an algorithm for constructing a part of the concept lattice that is based on a decision tree.
Second, we describe a prediction scheme based on a concept lattice for solving both classification and regression problems.
arXiv Detail & Related papers (2021-06-01T10:45:35Z) - Convex Polytope Trees [57.56078843831244]
Convex polytope trees (CPT) are proposed to expand the family of decision trees by an interpretable generalization of their decision boundary.
We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given.
arXiv Detail & Related papers (2020-10-21T19:38:57Z) - Rectified Decision Trees: Exploring the Landscape of Interpretable and
Effective Machine Learning [66.01622034708319]
We propose a knowledge distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criteria and the ending condition of the standard decision trees, which allows training with soft labels.
We then train the ReDT based on the soft label distilled from a well-trained teacher model through a novel jackknife-based method.
arXiv Detail & Related papers (2020-08-21T10:45:25Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z) - Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z) - dtControl: Decision Tree Learning Algorithms for Controller
Representation [0.0]
Decision trees can be used to represent provably-correct controllers concisely.
We present dtControl, an easily extensible tool for representing memoryless controllers as decision trees.
arXiv Detail & Related papers (2020-02-12T17:13:17Z)
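The abstract's idea of describing only the computation path for a given input, which also underlies the greedy-algorithm entry above, can be illustrated with the hedged sketch below. The greedy criterion used here (query the unknown attribute that occurs in the most rules still compatible with the answers so far) and the Rule and trace_path names are assumptions made for the demonstration, not the algorithm from the cited papers.

```python
# Hedged sketch: trace a computation path for one input directly from a rule
# system, without building the whole decision tree. The greedy attribute
# choice below is an assumption for illustration, not the cited papers' rule.
from typing import Dict, List, Tuple

Rule = Tuple[Dict[str, object], str]  # (conditions {attribute: value}, decision)


def trace_path(rules: List[Rule], x: Dict[str, object]) -> Tuple[List[Tuple[str, object]], str]:
    """Query attributes of input x one at a time until some rule applies."""
    active = list(rules)                  # rules not yet contradicted by answers
    known: Dict[str, object] = {}         # attribute values observed so far
    path: List[Tuple[str, object]] = []   # the computation path being described

    while active:
        # If every condition of some active rule is already known to hold, stop.
        for conds, decision in active:
            if all(known.get(a) == v for a, v in conds.items()):
                return path, decision
        # Greedy choice: query the unknown attribute used by the most active rules.
        counts: Dict[str, int] = {}
        for conds, _ in active:
            for a in conds:
                if a not in known:
                    counts[a] = counts.get(a, 0) + 1
        attr = max(counts, key=counts.get)
        known[attr] = x[attr]
        path.append((attr, x[attr]))
        # Discard rules contradicted by the newly observed value.
        active = [(c, d) for c, d in active
                  if all(known.get(a, v) == v for a, v in c.items())]

    return path, "no rule applies"


rules: List[Rule] = [
    ({"f1": 1, "f2": 1}, "class_1"),
    ({"f1": 1, "f2": 0}, "class_0"),
    ({"f1": 0}, "class_0"),
]
print(trace_path(rules, {"f1": 1, "f2": 1}))
# ([('f1', 1), ('f2', 1)], 'class_1')
```

Only the attributes that are actually queried appear in the returned path, which is the point of describing the path rather than materializing the whole tree.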
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.