Decision Concept Lattice vs. Decision Trees and Random Forests
- URL: http://arxiv.org/abs/2106.00387v1
- Date: Tue, 1 Jun 2021 10:45:35 GMT
- Title: Decision Concept Lattice vs. Decision Trees and Random Forests
- Authors: Egor Dudyrev, Sergei O. Kuznetsov
- Abstract summary: We merge the ideas underlying decision trees, their ensembles and Formal Concept Analysis (FCA) by proposing a new supervised machine learning model.
Specifically, we first propose a polynomial-time algorithm for constructing a part of the concept lattice that is based on a decision tree.
Second, we describe a prediction scheme based on a concept lattice for solving both classification and regression problems.
- Score: 4.898744396854312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision trees and their ensembles are very popular models of supervised
machine learning. In this paper we merge the ideas underlying decision trees,
their ensembles and FCA by proposing a new supervised machine learning model
which can be constructed in polynomial time and is applicable for both
classification and regression problems. Specifically, we first propose a
polynomial-time algorithm for constructing a part of the concept lattice that
is based on a decision tree. Second, we describe a prediction scheme based on a
concept lattice for solving both classification and regression tasks with
prediction quality comparable to that of state-of-the-art models.
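The lattice-based prediction scheme described in the abstract can be illustrated with a toy sketch (not the authors' implementation). Each decision-tree node is read as a formal concept: a pair of an intent (the conjunction of split conditions on the path to the node) and an extent (the training objects satisfying those conditions). A new object is then predicted by averaging targets over the most specific concepts covering it. All feature names, thresholds, and data below are invented for illustration.

```python
from statistics import mean

# Toy training data: (features, target) for a regression task.
X = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 0.5},
     {"x": 4.0, "y": 3.0}, {"x": 0.5, "y": 1.0}]
y = [10.0, 20.0, 25.0, 8.0]

# A hand-built decision tree, given as its root-to-node paths; each path
# is a conjunction of (feature, op, threshold) tests. In the paper's
# setting these paths would come from a trained tree.
paths = [
    [],                                    # root: all objects
    [("x", "<", 2.0)],                     # left child
    [("x", ">=", 2.0)],                    # right child
    [("x", ">=", 2.0), ("y", "<", 2.0)],   # right-left leaf
    [("x", ">=", 2.0), ("y", ">=", 2.0)],  # right-right leaf
]

def holds(obj, test):
    f, op, t = test
    return obj[f] < t if op == "<" else obj[f] >= t

def extent(path):
    """Indices of training objects satisfying every test on the path."""
    return [i for i, o in enumerate(X) if all(holds(o, c) for c in path)]

# Each tree node induces a concept: (intent = path tests, extent).
concepts = [(p, extent(p)) for p in paths]

def predict(obj):
    # Among concepts whose intent the object satisfies, keep the most
    # specific ones (smallest non-empty extents) and average their targets.
    covering = [(p, e) for p, e in concepts
                if e and all(holds(obj, c) for c in p)]
    best = min(len(e) for _, e in covering)
    chosen = [e for _, e in covering if len(e) == best]
    return mean(mean(y[i] for i in e) for e in chosen)

print(predict({"x": 3.5, "y": 2.5}))  # → 25.0 (x>=2, y>=2 concept)
```

Averaging over the most specific covering concepts mimics how a decision tree routes an object to a leaf, while the lattice view also exposes the intermediate, more general concepts for ensemble-style aggregation.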
Related papers
- Decision Trees for Interpretable Clusters in Mixture Models and Deep Representations [5.65604054654671]
We introduce the notion of an explainability-to-noise ratio for mixture models.
We propose an algorithm that takes as input a mixture model and constructs a suitable tree in data-independent time.
We prove upper and lower bounds on the error rate of the resulting decision tree.
arXiv Detail & Related papers (2024-11-03T14:00:20Z)
- Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z)
- An Axiomatic Approach to Model-Agnostic Concept Explanations [67.84000759813435]
We propose an approach to concept explanations that satisfy three natural axioms: linearity, recursivity, and similarity.
We then establish connections with previous concept explanation methods, offering insight into their varying semantic meanings.
arXiv Detail & Related papers (2024-01-12T20:53:35Z)
- Greedy Algorithm for Inference of Decision Trees from Decision Rule Systems [0.0]
Decision trees and decision rule systems play important roles as classifiers, knowledge representation tools, and algorithms.
In this paper, we consider the inverse transformation problem, which is not so simple.
Instead of constructing an entire decision tree, our study focuses on a greedy algorithm that simulates the operation of a decision tree on a given input.
arXiv Detail & Related papers (2024-01-08T09:28:55Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Invariant Causal Set Covering Machines [64.86459157191346]
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus, they are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
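Invariant Causal Set Covering Machines build on the classical Set Covering Machine, which greedily learns a conjunction of binary-valued rules. A minimal sketch of that classical greedy step (without the invariance machinery, on invented toy data) looks like this: at each iteration, add the rule that excludes the most remaining negative examples at the least cost in positives.

```python
# Minimal sketch of the classical (non-invariant) Set Covering Machine.
# The candidate rules, penalty p, and toy data are illustrative choices.

# Binary feature vectors; positives satisfy x0 == 1 AND x2 == 1.
X = [(1, 1, 1), (1, 0, 1), (0, 1, 1), (1, 1, 0), (0, 0, 0)]
y = [1, 1, 0, 0, 0]

# Candidate rules: "feature j equals v" predicates.
rules = [(j, v) for j in range(3) for v in (0, 1)]

def satisfies(x, rule):
    j, v = rule
    return x[j] == v

def learn_conjunction(X, y, p=1.0, max_rules=3):
    """Greedy SCM: repeatedly add the rule with the best utility
    (#negatives excluded - p * #positives excluded)."""
    conj = []
    neg = [i for i, label in enumerate(y) if label == 0]
    pos = [i for i, label in enumerate(y) if label == 1]
    while neg and len(conj) < max_rules:
        def utility(r):
            excl_neg = sum(1 for i in neg if not satisfies(X[i], r))
            excl_pos = sum(1 for i in pos if not satisfies(X[i], r))
            return excl_neg - p * excl_pos
        best = max(rules, key=utility)
        if utility(best) <= 0:
            break
        conj.append(best)
        neg = [i for i in neg if satisfies(X[i], best)]
        pos = [i for i in pos if satisfies(X[i], best)]
    return conj

conj = learn_conjunction(X, y)
predict = lambda x: int(all(satisfies(x, r) for r in conj))
print(conj, [predict(x) for x in X])  # → [(0, 1), (2, 1)] [1, 1, 0, 0, 0]
```

The invariant variant constrains this rule selection so that only rules whose association with the label is stable across environments are eligible, which is how spurious associations are provably avoided.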
arXiv Detail & Related papers (2023-06-07T20:52:01Z)
- Construction of Decision Trees and Acyclic Decision Graphs from Decision Rule Systems [0.0]
We study the complexity of constructing decision trees and acyclic decision graphs representing decision trees from decision rule systems.
We discuss the possibility of not building the entire decision tree, but describing the computation path in this tree for the given input.
arXiv Detail & Related papers (2023-05-02T18:40:48Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough for counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Contextual Decision Trees [62.997667081978825]
We propose a multi-armed contextual bandit recommendation framework for feature-based selection of a single shallow tree of the learned ensemble.
The trained system, which works on top of the Random Forest, dynamically identifies a base predictor that is responsible for providing the final output.
arXiv Detail & Related papers (2022-07-13T17:05:08Z)
- Decision Tree Learning with Spatial Modal Logics [0.0]
More-than-propositional symbolic learning methods have started to appear, in particular for time-dependent data.
We present a theory of spatial decision tree learning, and describe a prototypical implementation of a spatial decision tree learning algorithm.
We compare the predicting power of spatial decision trees with that of classical propositional decision trees in several versions, for a multi-class image classification problem.
arXiv Detail & Related papers (2021-09-17T02:35:18Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
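The dynamic-programming idea behind optimal depth-bounded trees can be sketched as follows. This simplified brute-force version is exact but omits the engineering (subproblem caching across branches, a specialized depth-two solver) that makes MurTree fast; the XOR data below is an invented example that no depth-1 tree can fit.

```python
# For each subset of the data and remaining depth budget, recursively
# choose the split minimizing total misclassifications; memoize subproblems.
from functools import lru_cache

X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # binary features
y = [0, 1, 1, 0]                       # XOR labels: needs depth 2

def leaf_cost(idx):
    """Misclassifications when predicting the majority label."""
    ones = sum(y[i] for i in idx)
    return min(ones, len(idx) - ones)

@lru_cache(maxsize=None)
def best_tree(idx, depth):
    """Minimum misclassifications over all trees of depth at most `depth`."""
    cost = leaf_cost(idx)
    if depth == 0 or cost == 0:
        return cost
    for f in range(len(X[0])):
        left = tuple(i for i in idx if X[i][f] == 0)
        right = tuple(i for i in idx if X[i][f] == 1)
        if left and right:
            cost = min(cost, best_tree(left, depth - 1)
                             + best_tree(right, depth - 1))
    return cost

all_idx = tuple(range(len(X)))
print(best_tree(all_idx, 1), best_tree(all_idx, 2))  # → 2 0
```

The recursion is exponential in depth in the worst case; MurTree's contribution is making this search practical on datasets with tens of thousands of instances.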
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.