SONG: Self-Organizing Neural Graphs
- URL: http://arxiv.org/abs/2107.13214v1
- Date: Wed, 28 Jul 2021 07:53:53 GMT
- Title: SONG: Self-Organizing Neural Graphs
- Authors: Łukasz Struski, Tomasz Danel, Marek Śmieja, Jacek Tabor, Bartosz Zieliński
- Abstract summary: Decision trees are easy to interpret since they are based on binary decisions, they can make decisions faster, and they provide a hierarchy of classes.
One of the well-known drawbacks of decision trees, as compared to decision graphs, is that decision trees cannot reuse the decision nodes.
In this paper, we provide a general paradigm based on Markov processes, which allows for efficient training of a special type of decision graph that we call Self-Organizing Neural Graphs (SONG).
- Score: 10.253870280561609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen a surge in research on deep interpretable neural
networks with decision trees as one of the most commonly incorporated tools.
There are at least three advantages of using decision trees over logistic
regression classification models: they are easy to interpret since they are
based on binary decisions, they can make decisions faster, and they provide a
hierarchy of classes. However, one of the well-known drawbacks of decision
trees, as compared to decision graphs, is that decision trees cannot reuse the
decision nodes. Nevertheless, decision graphs were not commonly used in deep
learning due to the lack of efficient gradient-based training techniques. In
this paper, we fill this gap and provide a general paradigm based on Markov
processes, which allows for efficient training of a special type of decision
graph that we call Self-Organizing Neural Graphs (SONG). We provide an
extensive theoretical study of SONG, complemented by experiments conducted on
Letter, Connect4, MNIST, CIFAR, and TinyImageNet datasets, showing that our
method performs on par or better than existing decision models.
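The Markov-process view of a decision graph can be made concrete with a small absorbing-chain sketch: internal decision nodes are transient states, class leaves are absorbing states, and the class distribution is the absorption probability. This is an illustrative toy (the transition matrices below are hypothetical and hand-picked), not the authors' implementation.

```python
import numpy as np

# Q[i, j]: probability of moving from internal node i to internal node j.
# Two internal nodes can share a successor, which is exactly the node
# reuse that distinguishes decision graphs from decision trees.
Q = np.array([
    [0.0, 0.6, 0.2],   # node 0 routes mostly to node 1
    [0.0, 0.0, 0.5],   # node 1 routes to node 2 or directly to a leaf
    [0.0, 0.0, 0.0],   # node 2 routes only to leaves
])

# R[i, k]: probability of moving from internal node i to leaf (class) k.
R = np.array([
    [0.2, 0.0],
    [0.1, 0.4],
    [0.7, 0.3],
])

# Each row of [Q | R] must be a probability distribution.
assert np.allclose(Q.sum(axis=1) + R.sum(axis=1), 1.0)

# Standard absorbing-chain result: B = (I - Q)^{-1} R gives, for each
# starting node, the probability of being absorbed in each class leaf.
B = np.linalg.inv(np.eye(len(Q)) - Q) @ R
print(B[0])  # class distribution when starting at the root (node 0)
```

In a trainable model, the entries of Q and R would be produced by a differentiable function of the input, so the absorption probabilities can be optimized by gradient descent; this sketch only shows the Markov-chain bookkeeping.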
Related papers
- Construction of Decision Trees and Acyclic Decision Graphs from Decision Rule Systems [0.0]
We study the complexity of constructing decision trees and acyclic decision graphs representing decision trees from decision rule systems.
We discuss the possibility of not building the entire decision tree, but describing the computation path in this tree for the given input.
arXiv Detail & Related papers (2023-05-02T18:40:48Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel graph deep model with a non-gradient decision layer for graph mining.
The proposed model has achieved state-of-the-art performance compared to the current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z)
- Optimal Decision Diagrams for Classification [68.72078059880018]
We study the training of optimal decision diagrams from a mathematical programming perspective.
We introduce a novel mixed-integer linear programming model for training.
We show how this model can be easily extended for fairness, parsimony, and stability notions.
arXiv Detail & Related papers (2022-05-28T18:31:23Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited depth of standard GNNs, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Tree in Tree: from Decision Trees to Decision Graphs [2.2336243882030025]
Tree in Tree decision graph (TnT) is a framework that extends the conventional decision tree to a more generic and powerful directed acyclic graph.
Our proposed model is a novel, more efficient, and accurate alternative to the widely-used decision trees.
arXiv Detail & Related papers (2021-10-01T13:20:05Z)
- Dive into Decision Trees and Forests: A Theoretical Demonstration [0.0]
Decision trees use the strategy of "divide-and-conquer" to divide a complex problem on the dependency between input features and labels into smaller ones.
Recent advances have greatly improved their performance in computational advertising, recommender systems, information retrieval, etc.
arXiv Detail & Related papers (2021-01-20T16:47:59Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a., soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
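Soft routing as described above can be sketched in a few lines: each internal node emits a sigmoidal probability of going left instead of a hard binary decision, and the prediction is a mixture of leaf distributions weighted by path probabilities. The weights and leaf distributions below are arbitrary illustrations, not the paper's learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # one input example

# Routing parameters for the 3 internal nodes of a depth-2 tree
# (root, left child, right child).
W = rng.normal(size=(3, 4))
b = np.zeros(3)
p = sigmoid(W @ x + b)            # p[i] = probability of going left at node i

# Class distributions stored at the 4 leaves (each row sums to 1).
leaves = np.array([
    [0.9, 0.1],
    [0.3, 0.7],
    [0.6, 0.4],
    [0.2, 0.8],
])

# Path probabilities: left-left, left-right, right-left, right-right.
paths = np.array([
    p[0] * p[1],
    p[0] * (1 - p[1]),
    (1 - p[0]) * p[2],
    (1 - p[0]) * (1 - p[2]),
])

# Soft prediction: mixture of leaf distributions, differentiable in W and b.
pred = paths @ leaves
print(pred)
```

Because every quantity is a smooth function of the routing parameters, the whole tree can be trained end-to-end by gradient descent, which is the point of replacing hard splits with soft ones.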
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- Learning Representations for Axis-Aligned Decision Forests through Input Perturbation [2.755007887718791]
Axis-aligned decision forests have long been the leading class of machine learning algorithms.
Despite their widespread use and rich history, decision forests to date fail to consume raw structured data.
We present a novel but intuitive proposal to achieve representation learning for decision forests.
arXiv Detail & Related papers (2020-07-29T11:56:38Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- dtControl: Decision Tree Learning Algorithms for Controller Representation [0.0]
Decision trees can be used to represent provably-correct controllers concisely.
We present dtControl, an easily extensible tool for representing memoryless controllers as decision trees.
arXiv Detail & Related papers (2020-02-12T17:13:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.