Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data
- URL: http://arxiv.org/abs/2006.10619v2
- Date: Thu, 13 Aug 2020 12:03:25 GMT
- Title: Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data
- Authors: Daniele Castellana and Davide Bacciu
- Abstract summary: We introduce two new aggregation functions to encode structural knowledge from tree-structured data.
We test them on two tree classification tasks, showing the advantage of the proposed models when the tree outdegree increases.
- Score: 12.069862650316262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper introduces two new aggregation functions to encode structural
knowledge from tree-structured data. They leverage the Canonical and
Tensor-Train decompositions to yield expressive context aggregation while
limiting the number of model parameters. Finally, we define two novel neural
recursive models for trees leveraging such aggregation functions, and we test
them on two tree classification tasks, showing the advantage of the proposed
models when the tree outdegree increases.
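To make the parameter saving concrete, the following is a minimal PyTorch sketch (not the paper's implementation) of a canonical/CP-style aggregation over a node's children: each child state is projected by its own factor matrix, the rank-R projections are combined by an element-wise product, and an output matrix maps the result back to the hidden size. The names, the tanh nonlinearity and the handling of variable outdegree are assumptions made for illustration.
```python
import torch
import torch.nn as nn

class CPTreeAggregation(nn.Module):
    """Sketch of a canonical (CP) decomposition-based child aggregation.

    Each child position gets its own factor matrix; the rank-R projections of
    the children are combined by an element-wise product and mapped back to
    the hidden size. Illustrative assumptions, not the paper's exact
    parameterisation.
    """

    def __init__(self, hidden_dim: int, max_outdegree: int, rank: int):
        super().__init__()
        # one factor matrix per child position; positions beyond a node's
        # actual outdegree are simply skipped, so trees of varying arity work
        self.factors = nn.ModuleList(
            [nn.Linear(hidden_dim, rank) for _ in range(max_outdegree)]
        )
        self.output = nn.Linear(rank, hidden_dim)

    def forward(self, children: list[torch.Tensor]) -> torch.Tensor:
        # children: non-empty list of (batch, hidden_dim) child states
        # (leaf nodes would be handled by a separate input transform)
        mixed = None
        for child, factor in zip(children, self.factors):
            projected = factor(child)                    # (batch, rank)
            mixed = projected if mixed is None else mixed * projected
        return torch.tanh(self.output(mixed))            # parent state
```
With hidden size d, maximum outdegree L and rank R, such a factorised aggregation uses on the order of L*d*R + R*d parameters, whereas an unfactorised aggregation tensor over the same children would need d^(L+1).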
Related papers
- Forecasting with Hyper-Trees [50.72190208487953]
Hyper-Trees are designed to learn the parameters of time series models.
By relating the parameters of a target time series model to features, Hyper-Trees also address the issue of parameter non-stationarity.
In this novel approach, the trees first generate informative representations from the input features, which a shallow network then maps to the target model parameters.
arXiv Detail & Related papers (2024-05-13T15:22:15Z)
- Tree Variational Autoencoders [5.992683455757179]
We propose a new generative hierarchical clustering model that learns a flexible tree-based posterior distribution over latent variables.
TreeVAE hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data.
arXiv Detail & Related papers (2023-06-15T09:25:04Z)
- Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well-established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance.
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
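As a hedged illustration of the merge rule, the sketch below runs plain agglomerative clustering but chooses, at every step, the pair of clusters with the largest average pairwise dot product; the naive search and the returned merge list are simplifications, not the authors' reference implementation.
```python
import numpy as np

def dot_product_agglomeration(X: np.ndarray):
    """Agglomerative clustering that merges, at every step, the pair of
    clusters with the maximum average pairwise dot product (instead of
    minimum distance or within-cluster variance). Returns the merge history
    as (cluster_id_a, cluster_id_b) pairs.
    """
    clusters = {i: [i] for i in range(len(X))}     # cluster id -> row indices
    merges, next_id = [], len(X)
    while len(clusters) > 1:
        best_score, best_pair = -np.inf, None
        ids = list(clusters)
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                members_a, members_b = clusters[ids[a]], clusters[ids[b]]
                # average dot product between members of the two clusters
                score = (X[members_a] @ X[members_b].T).mean()
                if score > best_score:
                    best_score, best_pair = score, (ids[a], ids[b])
        a, b = best_pair
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        merges.append((a, b))
        next_id += 1
    return merges
```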
arXiv Detail & Related papers (2023-05-24T11:05:12Z)
- A Tree-structured Transformer for Program Representation Learning [27.31416015946351]
Long-term/global dependencies widely exist in programs, and most neural networks fail to capture these dependencies.
In this paper, we propose Tree-Transformer, a novel tree-structured neural network which aims to overcome these limitations.
By combining bottom-up and top-down propagation, Tree-Transformer can learn both global contexts and meaningful node features.
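The bottom-up/top-down combination can be pictured with a generic two-pass tree encoder; the sketch below uses plain linear layers and summed child states rather than the paper's attention mechanism, and the node-ordering assumption is introduced only for this illustration.
```python
import torch
import torch.nn as nn

class TwoPassTreeEncoder(nn.Module):
    """Generic bottom-up + top-down propagation over a tree (plain linear
    layers and summed child states, not the paper's attention mechanism).
    Nodes are assumed to be indexed in topological order, parents before
    children; parents[v] == -1 marks the root.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.up = nn.Linear(2 * dim, dim)     # node input + child summary
        self.down = nn.Linear(2 * dim, dim)   # bottom-up state + parent context

    def forward(self, x: torch.Tensor, parents: list[int]) -> torch.Tensor:
        n, dim = x.shape
        children = [[] for _ in range(n)]
        for v, p in enumerate(parents):
            if p >= 0:
                children[p].append(v)
        # bottom-up pass: leaves to root
        up = [None] * n
        for v in reversed(range(n)):
            child_sum = sum((up[c] for c in children[v]),
                            torch.zeros(dim, device=x.device))
            up[v] = torch.tanh(self.up(torch.cat([x[v], child_sum])))
        # top-down pass: root to leaves, injecting global context
        down = [None] * n
        for v in range(n):
            ctx = (down[parents[v]] if parents[v] >= 0
                   else torch.zeros(dim, device=x.device))
            down[v] = torch.tanh(self.down(torch.cat([up[v], ctx])))
        return torch.stack(down)              # (n, dim) context-aware features
```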
arXiv Detail & Related papers (2022-08-18T05:42:01Z)
- Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder [50.591267188664666]
We propose an approach for identifying a meaningful tree structure from high-dimensional scRNA-seq data.
We then introduce DTAE, a tree-biased autoencoder that emphasizes the tree structure of the data in a low-dimensional space.
arXiv Detail & Related papers (2021-02-11T08:48:48Z)
- Robust estimation of tree structured models [0.0]
We show that it is possible to recover trees from noisy binary data up to a small equivalence class of possible trees.
We also provide a characterisation of when the Chow-Liu algorithm consistently learns the underlying tree from the noisy data.
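For context, the Chow-Liu procedure referred to above can be sketched as follows for binary variables: estimate all pairwise mutual informations from the samples and keep a maximum-weight spanning tree. The noise analysis that the paper contributes is not reproduced here; this is a textbook baseline, not the authors' code.
```python
import numpy as np
from itertools import combinations

def chow_liu_tree(samples: np.ndarray):
    """Textbook Chow-Liu procedure for binary data: empirical pairwise mutual
    information followed by a maximum-weight spanning tree (Kruskal with
    union-find). Returns the tree edges as (i, j) variable-index pairs.
    """
    _, n_vars = samples.shape

    def mutual_info(i, j):
        mi = 0.0
        for a in (0, 1):
            for b in (0, 1):
                p_ab = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                p_a, p_b = np.mean(samples[:, i] == a), np.mean(samples[:, j] == b)
                if p_ab > 0:
                    mi += p_ab * np.log(p_ab / (p_a * p_b))
        return mi

    edges = sorted(combinations(range(n_vars), 2),
                   key=lambda e: mutual_info(*e), reverse=True)
    parent = list(range(n_vars))        # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                    # keep the edge if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```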
arXiv Detail & Related papers (2021-02-10T14:58:40Z)
- SGA: A Robust Algorithm for Partial Recovery of Tree-Structured Graphical Models with Noisy Samples [75.32013242448151]
We consider learning Ising tree models when the observations from the nodes are corrupted by independent but non-identically distributed noise.
Katiyar et al. (2020) showed that although the exact tree structure cannot be recovered, one can recover a partial tree structure.
We propose Symmetrized Geometric Averaging (SGA), a more statistically robust algorithm for partial tree recovery.
arXiv Detail & Related papers (2021-01-22T01:57:35Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1] and [3].
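A minimal sketch of soft routing in a single probabilistic tree, assuming sigmoid gates at the inner nodes and a class distribution at each leaf: every sample reaches every leaf with some probability, and the prediction is the reach-weighted mixture of leaf distributions. The depth, the linear gating and the leaf parameterisation are illustrative choices; the paper's deep forests stack many such learners with learned connectivity, which is not shown here.
```python
import torch
import torch.nn as nn

class SoftRoutingTree(nn.Module):
    """A single probabilistic tree with soft routing: each inner node emits a
    sigmoid gate, every sample reaches every leaf with some probability, and
    the prediction is the reach-weighted mixture of the leaf distributions.
    Depth, linear gates and leaf parameterisation are illustrative choices.
    """

    def __init__(self, in_dim: int, n_classes: int, depth: int = 3):
        super().__init__()
        self.depth = depth
        n_inner, n_leaves = 2 ** depth - 1, 2 ** depth
        self.gates = nn.Linear(in_dim, n_inner)          # one gate per inner node
        self.leaves = nn.Parameter(torch.zeros(n_leaves, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p_right = torch.sigmoid(self.gates(x))                # (batch, n_inner)
        reach = torch.ones(x.shape[0], 1, device=x.device)    # prob. of reaching root
        for d in range(self.depth):
            level = p_right[:, 2 ** d - 1: 2 ** (d + 1) - 1]  # gates at depth d
            # each reached node splits its probability mass between its children
            reach = torch.stack([reach * (1 - level), reach * level], dim=2)
            reach = reach.reshape(x.shape[0], -1)
        # mixture of leaf class distributions, weighted by reach probabilities
        return reach @ torch.softmax(self.leaves, dim=-1)
```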
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- Learning from Non-Binary Constituency Trees via Tensor Decomposition [12.069862650316262]
We introduce a new approach to deal with non-binary constituency trees.
We show how a powerful composition function based on the canonical tensor decomposition can exploit such a rich structure.
We experimentally assess its performance on different NLP tasks.
arXiv Detail & Related papers (2020-11-02T10:06:59Z)
- Recursive Top-Down Production for Sentence Generation with Latent Trees [77.56794870399288]
We model the production property of context-free grammars for natural and synthetic languages.
We present a dynamic programming algorithm that marginalises over latent binary tree structures with $N$ leaves.
We also present experimental results on German-English translation on the Multi30k dataset.
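The marginalisation over latent binary trees can be illustrated with an inside-style dynamic programme: every span of leaves accumulates, in log-space, the scores of all ways to split it, so the root cell sums over every binary bracketing of the $N$ leaves. The span-score parameterisation below is a placeholder, not the paper's production model.
```python
import torch

def marginalise_binary_trees(span_scores: torch.Tensor) -> torch.Tensor:
    """Inside-style dynamic programme that sums, in log-space, over all
    binary bracketings of N leaves. span_scores[i, j] is a log-score for the
    span of leaves i..j (inclusive); this parameterisation is a placeholder.
    Returns the log marginal accumulated at the root cell.
    """
    n = span_scores.shape[0]
    inside = torch.full((n, n), float('-inf'), device=span_scores.device)
    inside.fill_diagonal_(0.0)                 # single leaves: log 1 = 0
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            # sum over split points k: left span i..k, right span k+1..j
            splits = inside[i, i:j] + inside[i + 1:j + 1, j]
            inside[i, j] = torch.logsumexp(splits, dim=0) + span_scores[i, j]
    return inside[0, n - 1]
```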
arXiv Detail & Related papers (2020-10-09T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.