dtControl: Decision Tree Learning Algorithms for Controller
Representation
- URL: http://arxiv.org/abs/2002.04991v1
- Date: Wed, 12 Feb 2020 17:13:17 GMT
- Title: dtControl: Decision Tree Learning Algorithms for Controller
Representation
- Authors: Pranav Ashok, Mathias Jackermeier, Pushpak Jagtap, Jan
Křetínský, Maximilian Weininger, Majid Zamani
- Abstract summary: Decision trees can be used to represent provably-correct controllers concisely.
We present dtControl, an easily extensible tool for representing memoryless controllers as decision trees.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision tree learning is a popular classification technique most commonly
used in machine learning applications. Recent work has shown that decision
trees can be used to represent provably-correct controllers concisely. Compared
to representations using lookup tables or binary decision diagrams, decision
trees are smaller and more explainable. We present dtControl, an easily
extensible tool for representing memoryless controllers as decision trees. We
give a comprehensive evaluation of various decision tree learning algorithms
applied to 10 case studies arising out of correct-by-construction controller
synthesis. These algorithms include two new techniques, one for using arbitrary
linear binary classifiers in the decision tree learning, and one novel approach
for determinizing controllers during the decision tree construction. In
particular, the latter turns out to be extremely efficient, yielding decision
trees with a single-digit number of decision nodes on 5 of the case studies.
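To make the determinization idea concrete, the sketch below learns a tree from a toy non-deterministic controller (a dict from state tuples to sets of allowed actions): whenever every state reaching a node shares an allowed action, that node is immediately closed as a single leaf. The controller format, function names, and the axis-aligned splitting heuristic are illustrative assumptions only; this is not dtControl's API, and it omits the paper's linear-classifier predicates.

```python
# Toy sketch: learn a decision tree from a non-deterministic controller and
# determinize it during construction. Hypothetical code, not dtControl's API.

def common_action(states, controller):
    """Return an action allowed in every state of this node, if any exists;
    choosing it determinizes the controller at this leaf."""
    shared = set.intersection(*(controller[s] for s in states))
    return min(shared) if shared else None  # min() only makes the choice deterministic

def best_axis_split(states, controller):
    """Pick an axis-aligned predicate s[dim] <= threshold; the impurity used
    here is simply the number of distinct allowed-action sets per side."""
    def impurity(part):
        return len({frozenset(controller[s]) for s in part})
    best = None
    for dim in range(len(states[0])):
        for thr in sorted({s[dim] for s in states})[:-1]:
            left = [s for s in states if s[dim] <= thr]
            right = [s for s in states if s[dim] > thr]
            score = impurity(left) + impurity(right)
            if best is None or score < best[0]:
                best = (score, dim, thr, left, right)
    return best

def build_tree(states, controller):
    """Grow the tree recursively; as soon as all states in a node share an
    allowed action, emit a single leaf instead of splitting further."""
    action = common_action(states, controller)
    if action is not None:
        return ("leaf", action)
    _, dim, thr, left, right = best_axis_split(states, controller)
    return ("node", dim, thr,
            build_tree(left, controller), build_tree(right, controller))

# A tiny non-deterministic controller: state -> set of allowed actions.
controller = {(0, 0): {"a", "b"}, (0, 1): {"a"},
              (1, 0): {"b", "c"}, (1, 1): {"c"}}
print(build_tree(list(controller), controller))
# -> ('node', 0, 0, ('leaf', 'a'), ('leaf', 'c'))
```

On this toy controller, the shared-action check collapses both children into single-action leaves after one split, which mirrors how determinizing during construction can keep the resulting trees small.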
Related papers
- Optimizing Interpretable Decision Tree Policies for Reinforcement Learning [10.68128849363198]
Decision trees have gained increased attention in supervised learning for their inherent interpretability.
This paper considers the problem of optimizing interpretable decision tree policies to replace neural networks in reinforcement learning settings.
arXiv Detail & Related papers (2024-08-21T14:04:00Z) - Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z) - Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z) - Greedy Algorithm for Inference of Decision Trees from Decision Rule
Systems [0.0]
Decision trees and decision rule systems play important roles as classifiers, knowledge representation tools, and algorithms.
In this paper, we consider the inverse transformation problem, which is far from straightforward.
Instead of constructing an entire decision tree, our study focuses on a greedy polynomial-time algorithm that simulates the operation of a decision tree on a given tuple of attribute values (a generic sketch of this idea appears after the related-papers list).
arXiv Detail & Related papers (2024-01-08T09:28:55Z) - TreeDQN: Learning to minimize Branch-and-Bound tree [78.52895577861327]
Branch-and-Bound is a convenient approach to solving optimization tasks in the form of Mixed Integer Linear Programs.
The efficiency of the solver depends on the branching heuristic used to select a variable for splitting.
We propose a reinforcement learning method that can efficiently learn the branching heuristic.
arXiv Detail & Related papers (2023-06-09T14:01:26Z) - Construction of Decision Trees and Acyclic Decision Graphs from Decision
Rule Systems [0.0]
We study the complexity of constructing decision trees and acyclic decision graphs representing decision trees from decision rule systems.
We discuss the possibility of not building the entire decision tree, but instead describing the computation path in this tree for a given input.
arXiv Detail & Related papers (2023-05-02T18:40:48Z) - R(Det)^2: Randomized Decision Routing for Object Detection [64.48369663018376]
We propose a novel approach to combine decision trees and deep neural networks in an end-to-end learning manner for object detection.
To facilitate effective learning, we propose randomized decision routing with node selective and associative losses.
We name this approach randomized decision routing for object detection, abbreviated as R(Det)$^2$.
arXiv Detail & Related papers (2022-04-02T07:54:58Z) - SONG: Self-Organizing Neural Graphs [10.253870280561609]
Decision trees are easy to interpret since they are based on binary decisions; they can also make decisions quickly, and they provide a hierarchy of classes.
One of the well-known drawbacks of decision trees, as compared to decision graphs, is that decision trees cannot reuse the decision nodes.
In this paper, we provide a general paradigm based on Markov processes, which allows for efficient training of a special type of decision graph, which we call Self-Organizing Neural Graphs (SONG).
arXiv Detail & Related papers (2021-07-28T07:53:53Z) - Rectified Decision Trees: Exploring the Landscape of Interpretable and
Effective Machine Learning [66.01622034708319]
We propose a knowledge distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criteria and the ending condition of standard decision trees, which allows training with soft labels.
We then train the ReDT on soft labels distilled from a well-trained teacher model through a novel jackknife-based method.
arXiv Detail & Related papers (2020-08-21T10:45:25Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z) - Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
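As referenced above, the entry on inferring decision trees from decision rule systems describes simulating a tree's run on one input instead of building the whole tree. The sketch below illustrates only that general idea; the rule format and the "query the attribute used by most surviving rules" heuristic are assumptions of this sketch, not the cited paper's algorithm.

```python
# Generic illustration: simulate a decision tree's run on one input, given only
# a decision rule system, without materializing the tree. Hypothetical format
# and heuristic, not the algorithm of the cited paper.

from collections import Counter

# Each rule: (conditions, decision) where conditions maps attribute -> value.
RULES = [
    ({"x1": 0, "x2": 1}, "stop"),
    ({"x1": 1}, "go"),
    ({"x2": 0}, "go"),
]

def simulate(rules, inp):
    """Greedily query attributes as a decision tree would, pruning rules
    that contradict the answers, until some rule is fully satisfied."""
    known = {}
    alive = list(rules)
    while True:
        # A rule fires once every one of its conditions has been confirmed.
        for cond, decision in alive:
            if all(known.get(a) == v for a, v in cond.items()):
                return decision
        # Greedy choice: query the unknown attribute used by the most rules.
        counts = Counter(a for cond, _ in alive for a in cond if a not in known)
        if not counts:
            return None  # no rule applies to this input
        attr = counts.most_common(1)[0][0]
        known[attr] = inp[attr]
        # Drop rules contradicted by the newly observed value.
        alive = [(c, d) for c, d in alive
                 if attr not in c or c[attr] == known[attr]]

print(simulate(RULES, {"x1": 0, "x2": 1}))  # expected: "stop"
```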