Highly Efficient Structural Learning of Sparse Staged Trees
- URL: http://arxiv.org/abs/2206.06970v1
- Date: Tue, 14 Jun 2022 16:46:13 GMT
- Title: Highly Efficient Structural Learning of Sparse Staged Trees
- Authors: Manuele Leonelli, Gherardo Varando
- Abstract summary: We introduce the first scalable structural learning algorithm for staged trees, which searches over a space of models where only a small number of dependencies can be imposed.
A simulation study as well as a real-world application illustrate our routines and the practical use of such data-learned staged trees.
- Score: 2.3572498744567127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several structural learning algorithms for staged tree models, an asymmetric
extension of Bayesian networks, have been defined. However, they do not scale
efficiently as the number of variables considered increases. Here we introduce
the first scalable structural learning algorithm for staged trees, which
searches over a space of models where only a small number of dependencies can
be imposed. A simulation study as well as a real-world application illustrate
our routines and the practical use of such data-learned staged trees.
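The restricted search space can be pictured with a toy sketch of greedy stage merging (illustrative only, not the paper's algorithm; the function names and the merging criterion are assumptions): start with one stage per parent context of a variable, then repeatedly merge the pair of stages whose pooled counts lose the least log-likelihood, until only a small stage budget remains.

```python
import math
from collections import Counter

def loglik(counts):
    """Maximised multinomial log-likelihood of one stage's outcome counts."""
    n = sum(counts.values())
    return sum(c * math.log(c / n) for c in counts.values() if c > 0)

def greedy_stages(data, var, parents, max_stages):
    """Greedily merge parent contexts of `var` into stages.

    data: list of dicts mapping variable name -> value.
    Starts with one stage per parent context and repeatedly merges the
    pair whose union loses the least log-likelihood, until only
    `max_stages` stages remain (a crude sparsity restriction).
    """
    stages = {}
    for row in data:
        ctx = tuple(row[p] for p in parents)
        stages.setdefault(ctx, Counter())[row[var]] += 1
    # a stage is a tuple of contexts sharing one conditional distribution
    stages = {(ctx,): counts for ctx, counts in stages.items()}
    while len(stages) > max_stages:
        best = None
        for a in stages:
            for b in stages:
                if a < b:
                    drop = (loglik(stages[a]) + loglik(stages[b])
                            - loglik(stages[a] + stages[b]))
                    if best is None or drop < best[0]:
                        best = (drop, a, b)
        _, a, b = best
        stages[a + b] = stages.pop(a) + stages.pop(b)
    return stages
```

Limiting `max_stages` plays the role of the sparsity constraint: only a few distinct conditional distributions, and hence few effective dependencies, can be imposed.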
Related papers
- Modern Neighborhood Components Analysis: A Deep Tabular Baseline Two Decades Later [59.88557193062348]
We revisit the classic Neighborhood Components Analysis (NCA), designed to learn a linear projection that captures semantic similarities between instances.
We find that minor modifications, such as adjustments to the learning objectives and the integration of deep learning architectures, significantly enhance NCA's performance.
We also introduce a neighbor sampling strategy that improves both the efficiency and predictive accuracy of our proposed ModernNCA.
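The classic NCA objective that this line of work builds on can be stated compactly: each point stochastically picks a neighbour with probability decaying in projected distance, and the objective is the expected leave-one-out accuracy. A minimal NumPy sketch of that classic objective (the function name is an assumption; this is not ModernNCA's implementation):

```python
import numpy as np

def nca_objective(A, X, y):
    """Expected leave-one-out accuracy under NCA's stochastic neighbours.

    A: (d_out, d_in) projection, X: (n, d_in) data, y: (n,) labels.
    Point i picks neighbour j with probability proportional to
    exp(-||A x_i - A x_j||^2); the objective is the average probability
    of picking a same-class neighbour.
    """
    Z = X @ A.T                                      # project the data
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)                     # a point never picks itself
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)                # softmax over neighbours
    same = y[:, None] == y[None, :]
    return float((P * same).sum(axis=1).mean())
```

Maximising this quantity over `A` (e.g. by gradient ascent) recovers the classic NCA training loop.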
arXiv Detail & Related papers (2024-07-03T16:38:57Z)
- Terminating Differentiable Tree Experts [77.2443883991608]
We propose a neuro-symbolic Differentiable Tree Machine that learns tree operations using a combination of transformers and Representation Products.
We first remove a series of different transformer layers that are used in every step by introducing a mixture of experts.
We additionally propose a new termination algorithm to provide the model the power to choose how many steps to make automatically.
arXiv Detail & Related papers (2024-07-02T08:45:38Z) - LiteSearch: Efficacious Tree Search for LLM [70.29796112457662]
This study introduces a novel guided tree search algorithm with dynamic node selection and node-level exploration budget.
Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach enjoys significantly lower computational costs compared to baseline methods.
arXiv Detail & Related papers (2024-06-29T05:14:04Z) - GrootVL: Tree Topology is All You Need in State Space Model [66.36757400689281]
GrootVL is a versatile multimodal framework that can be applied to both visual and textual tasks.
Our method significantly outperforms existing structured state space models on image classification, object detection and segmentation.
By fine-tuning large language models, our approach achieves consistent improvements in multiple textual tasks at minor training cost.
arXiv Detail & Related papers (2024-06-04T15:09:29Z) - Learning Staged Trees from Incomplete Data [1.6327794667678908]
We introduce the first algorithms for staged trees that handle missingness within the learning of the model.
A computational experiment showcases the performance of the novel learning algorithms.
arXiv Detail & Related papers (2024-05-28T16:00:23Z) - A generalized decision tree ensemble based on the NeuralNetworks
architecture: Distributed Gradient Boosting Forest (DGBF) [0.0]
We present a graph-structured tree-ensemble algorithm with a naturally distributed representation-learning process between trees.
We call this novel approach Distributed Gradient Boosting Forest (DGBF) and we demonstrate that both RandomForest and GradientBoosting can be expressed as particular graph architectures of DGBF.
Finally, we see that distributed learning outperforms both RandomForest and GradientBoosting on 7 out of 9 datasets.
arXiv Detail & Related papers (2024-02-04T09:22:52Z) - Structural Learning of Simple Staged Trees [2.3572498744567127]
We introduce the first structural learning algorithms for the class of simple staged trees.
We show that data-learned simple staged trees often outperform Bayesian networks in model fit.
arXiv Detail & Related papers (2022-03-08T20:50:39Z) - Growing Deep Forests Efficiently with Soft Routing and Learned
Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1] and [3].
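The soft-routing idea can be illustrated with a toy probabilistic tree: each internal node routes left with a sigmoid probability, and the prediction averages all leaves weighted by path probability. A hypothetical sketch, not the paper's architecture:

```python
import numpy as np

def soft_tree_predict(x, nodes, leaves):
    """Prediction of a small soft-routing tree (heap-indexed).

    nodes: dict index -> (w, b); node i routes left with sigmoid(w @ x + b);
    children of node i are 2i+1 (left) and 2i+2 (right).
    leaves: dict index -> value.  The output is the path-probability-weighted
    average of all leaves, so every leaf contributes, unlike a hard tree.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def path_prob(leaf):
        p, i = 1.0, leaf
        while i > 0:                      # walk up to the root
            parent = (i - 1) // 2
            w, b = nodes[parent]
            left = sigmoid(w @ x + b)
            p *= left if i == 2 * parent + 1 else 1.0 - left
            i = parent
        return p

    return sum(path_prob(i) * leaves[i] for i in leaves)
```

Because the routing is differentiable, the node parameters `(w, b)` and the leaf values can all be trained by gradient descent, which is what distinguishes soft routing from the hard splits of a classical decision tree.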
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
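The dynamic-programming idea behind depth-limited optimal trees can be sketched as a memoized recursion over (row subset, remaining depth); this toy version (all names are assumptions) omits the bounds, caching schemes, and specialised depth-two solver that make MurTree fast:

```python
from functools import lru_cache

def optimal_tree_errors(rows, depth):
    """Minimum misclassifications of any depth-`depth` tree, by brute-force DP.

    rows: tuple of (features, label) pairs with binary feature tuples.
    A leaf predicts the majority label of its rows; an internal node
    splits on one feature and recurses with one less unit of depth.
    """
    n_feats = len(rows[0][0]) if rows else 0

    @lru_cache(maxsize=None)
    def solve(idx, d):
        labels = [rows[i][1] for i in idx]
        leaf = len(labels) - max(labels.count(0), labels.count(1))
        if d == 0 or leaf == 0:
            return leaf                       # stop: depth exhausted or pure
        best = leaf
        for f in range(n_feats):
            left = tuple(i for i in idx if rows[i][0][f] == 0)
            right = tuple(i for i in idx if rows[i][0][f] == 1)
            if left and right:
                best = min(best, solve(left, d - 1) + solve(right, d - 1))
        return best

    return solve(tuple(range(len(rows))), depth)
```

Memoisation pays off because the same row subsets recur under different split orders; MurTree's contribution is making this recursion tractable on datasets with tens of thousands of instances.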
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- The R Package stagedtrees for Structural Learning of Stratified Staged Trees [1.9199289015460215]
stagedtrees is an R package which includes several algorithms for learning the structure of staged trees and chain event graphs from data.
The capabilities of stagedtrees are illustrated mainly using two datasets, each either included in the package or bundled with R.
arXiv Detail & Related papers (2020-04-14T13:02:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.