Tree Echo State Autoencoders with Grammars
- URL: http://arxiv.org/abs/2004.08925v1
- Date: Sun, 19 Apr 2020 18:04:33 GMT
- Title: Tree Echo State Autoencoders with Grammars
- Authors: Benjamin Paassen, Irena Koprinska, Kalina Yacef
- Abstract summary: The non-vectorial and discrete nature of trees makes it challenging to construct functions with tree-formed output.
Existing autoencoding approaches fail to take the specific grammatical structure of tree domains into account.
We propose tree echo state autoencoders (TES-AE), which are guided by a tree grammar and can be trained within seconds by virtue of reservoir computing.
- Score: 3.7280152311394827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tree data occurs in many forms, such as computer programs, chemical
molecules, or natural language. Unfortunately, the non-vectorial and discrete
nature of trees makes it challenging to construct functions with tree-formed
output, complicating tasks such as optimization or time series prediction.
Autoencoders address this challenge by mapping trees to a vectorial latent
space, where tasks are easier to solve, and then mapping the solution back to a
tree structure. However, existing autoencoding approaches for tree data fail to
take the specific grammatical structure of tree domains into account and rely
on deep learning, thus requiring large training datasets and long training
times. In this paper, we propose tree echo state autoencoders (TES-AE), which
are guided by a tree grammar and can be trained within seconds by virtue of
reservoir computing. In our evaluation on three datasets, we demonstrate that
our proposed approach is not only much faster than a state-of-the-art deep
learning autoencoding approach (D-VAE) but also achieves a lower autoencoding
error when little data and training time are available.
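The following is a minimal, hedged sketch of the recipe the abstract describes: a tree grammar constrains which rules can be produced, all recurrent weights are fixed random reservoir matrices, and only linear readouts are trained (here by ridge regression). The toy grammar, dimensions, and function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a grammar-guided tree echo state autoencoder.
# Reservoir weights are fixed and random; only the readout V is trained.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Toy regular tree grammar: each rule maps nonterminal E to a symbol
# with a fixed list of child nonterminals.
RULES = [("E", "plus", ["E", "E"]), ("E", "times", ["E", "E"]),
         ("E", "x", []), ("E", "y", [])]

# Reservoir: a fixed random vector per rule and a fixed random matrix
# per child slot; none of these are ever trained.
rule_vec = [rng.normal(0, 1, DIM) for _ in RULES]
child_mat = [rng.normal(0, 1 / np.sqrt(DIM), (DIM, DIM)) for _ in range(2)]

def encode(tree):
    """Bottom-up echo state encoding; trees are (rule_index, [children])."""
    rule, children = tree
    h = rule_vec[rule].copy()
    for slot, child in enumerate(children):
        h += child_mat[slot] @ encode(child)
    return np.tanh(h)

def collect(tree, h, states, labels):
    """Record (state, rule) pairs along a top-down pass for readout training."""
    rule, children = tree
    states.append(h)
    labels.append(rule)
    for slot, child in enumerate(children):
        collect(child, np.tanh(child_mat[slot] @ h), states, labels)

def train_readout(trees, lam=1e-2):
    """Ridge regression: the readout weights are the only trained parameters."""
    states, labels = [], []
    for t in trees:
        collect(t, encode(t), states, labels)
    S = np.array(states)
    Y = np.eye(len(RULES))[labels]          # one-hot rule targets
    return np.linalg.solve(S.T @ S + lam * np.eye(DIM), S.T @ Y)

def decode(h, V, depth=0):
    """Grammar-guided decoding: the readout picks a rule, the reservoir maps
    the state to child states. All rules here share the nonterminal E; a real
    implementation would mask rules by the expected nonterminal. A depth cap
    forces a leaf rule so the sketch always terminates."""
    rule = int(np.argmax(h @ V)) if depth < 8 else 2
    children = [decode(np.tanh(child_mat[slot] @ h), V, depth + 1)
                for slot in range(len(RULES[rule][2]))]
    return (rule, children)

# Usage: autoencode plus(x, times(y, x)) and print the reconstruction.
tree = (0, [(2, []), (1, [(3, []), (2, [])])])
V = train_readout([tree])
print(decode(encode(tree), V))
```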
Related papers
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
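As a hedged illustration of the data-generation recipe above, the sketch below fits decision trees on many synthetic datasets and keeps only those that generalize well as supervision targets. sklearn's greedy CART stands in for both the greedy and the globally optimized trees, and the 0.8 accuracy threshold is an invented placeholder.

```python
# Generate (dataset, tree) supervision pairs, keeping only trees that
# generalize well on held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def generate_supervision(n_datasets=100, acc_threshold=0.8, seed=0):
    targets = []
    for i in range(n_datasets):
        X, y = make_classification(n_samples=200, n_features=10,
                                   random_state=seed + i)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed + i)
        tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
        if tree.score(X_te, y_te) >= acc_threshold:  # keep only strong trees
            targets.append((X_tr, y_tr, tree))       # (dataset, target tree)
    return targets

print(len(generate_supervision()), "datasets with strong trees kept")
```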
- Tree Prompting: Efficient Task Adaptation without Fine-Tuning [112.71020326388029]
Tree Prompting builds a decision tree of prompts, linking multiple LM calls together to solve a task.
Experiments on classification datasets show that Tree Prompting improves accuracy over competing methods and is competitive with fine-tuning.
arXiv Detail & Related papers (2023-10-21T15:18:22Z)
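A hedged sketch of the data structure described above: each internal node holds a yes/no prompt, the LM's answer routes the input to a child, and leaves carry the final label. `call_lm` is an invented placeholder for a real LM API call, not part of the paper.

```python
# A decision tree whose internal nodes are prompts; LM answers route inputs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromptNode:
    prompt: Optional[str] = None        # None on leaf nodes
    label: Optional[str] = None         # set only on leaves
    yes: Optional["PromptNode"] = None
    no: Optional["PromptNode"] = None

def call_lm(prompt: str, text: str) -> bool:
    """Placeholder: a real implementation would query an LM and parse yes/no."""
    return "great" in text if "positive" in prompt else "movie" in text

def classify(node: PromptNode, text: str) -> str:
    while node.label is None:           # descend until a leaf is reached
        node = node.yes if call_lm(node.prompt, text) else node.no
    return node.label

# A two-level tree: first decide the domain, then the sentiment.
tree = PromptNode(
    prompt="Is this text about a movie? Answer yes or no.",
    yes=PromptNode(prompt="Is the sentiment positive? Answer yes or no.",
                   yes=PromptNode(label="positive movie review"),
                   no=PromptNode(label="negative movie review")),
    no=PromptNode(label="not a movie review"))

print(classify(tree, "A great movie with a moving story."))
```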
- New Linear-time Algorithm for SubTree Kernel Computation based on Root-Weighted Tree Automata [0.0]
We propose a new linear-time algorithm based on the concept of weighted tree automata for SubTree kernel computation.
The key idea behind the proposed algorithm is to replace the DAG reduction and node sorting steps.
Our approach has three major advantages: it is output-sensitive, it is insensitive to the tree type (ordered versus unordered trees), and it is well suited to incremental tree-kernel-based learning methods.
arXiv Detail & Related papers (2023-02-02T13:37:48Z)
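For context, here is a hedged sketch of the SubTree kernel the paper computes: K(T1, T2) counts pairs of nodes whose complete subtrees are identical. The sketch uses plain subtree hashing, close in spirit to the DAG-reduction baseline, not the authors' weighted-automata algorithm.

```python
# SubTree kernel via canonical subtree ids: identical complete subtrees
# receive the same id, and the kernel sums matching occurrence counts.
from collections import Counter

def subtree_ids(tree, table, counts):
    """Assign a canonical id to every complete subtree, counting occurrences."""
    label, children = tree
    key = (label, tuple(subtree_ids(c, table, counts) for c in children))
    if key not in table:
        table[key] = len(table)
    counts[table[key]] += 1
    return table[key]

def subtree_kernel(t1, t2):
    table = {}                      # shared so identical subtrees share ids
    c1, c2 = Counter(), Counter()
    subtree_ids(t1, table, c1)
    subtree_ids(t2, table, c2)
    return sum(c1[i] * c2[i] for i in c1)   # matching subtree pairs

# f(a, b) vs. g(a, f(a, b)): a matches twice, b once, f(a, b) once.
t1 = ("f", [("a", []), ("b", [])])
t2 = ("g", [("a", []), ("f", [("a", []), ("b", [])])])
print(subtree_kernel(t1, t2))       # -> 4
```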
- Structure-Unified M-Tree Coding Solver for Math Word Problem [57.825176412485504]
Previous work has shown that models which account for the binary-tree structure of mathematical expressions on the output side achieve better performance.
In this paper, we propose the Structure-Unified M-Tree Coding Solver (SUMC-Solver), which applies a tree with any M branches (M-tree) to unify the output structures.
Experimental results on the widely used MAWPS and Math23K datasets demonstrate that SUMC-Solver not only outperforms several state-of-the-art models but also performs much better under low-resource conditions.
arXiv Detail & Related papers (2022-10-22T12:20:36Z)
- Autoencoders as Tools for Program Synthesis [0.43012765978447565]
We introduce a variational autoencoder model for program synthesis of industry-grade programming languages.
Our model incorporates the internal hierarchical structure of source code and operates on parse trees.
arXiv Detail & Related papers (2021-08-16T14:51:11Z)
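As a minimal illustration of operating on parse trees of source code, the snippet below uses Python's built-in ast module as a stand-in for the industry-grade grammars the paper targets.

```python
# Parse a small function and print its hierarchical structure; a tree-based
# autoencoder would consume exactly this kind of nested node structure.
import ast

tree = ast.parse("def add(a, b):\n    return a + b")
print(ast.dump(tree.body[0], indent=2))
```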
- Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder [50.591267188664666]
We propose an approach for identifying a meaningful tree structure from high-dimensional scRNA-seq data.
We then introduce DTAE, a tree-biased autoencoder that emphasizes the tree structure of the data in low-dimensional space.
arXiv Detail & Related papers (2021-02-11T08:48:48Z)
- Recursive Tree Grammar Autoencoders [3.791857415239352]
We propose a novel autoencoder approach that encodes trees via a bottom-up grammar and decodes trees via a tree grammar.
We show experimentally that our proposed method improves the autoencoding error, training time, and optimization score on four benchmark datasets.
arXiv Detail & Related papers (2020-12-03T17:37:25Z)
- Recursive Top-Down Production for Sentence Generation with Latent Trees [77.56794870399288]
We model the production property of context-free grammars for natural and synthetic languages.
We present a dynamic programming algorithm that marginalises over latent binary tree structures with $N$ leaves.
We also present experimental results on German-English translation on the Multi30k dataset.
arXiv Detail & Related papers (2020-10-09T17:47:16Z)
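A hedged sketch of such a marginalisation: an inside-style dynamic program sums over all binary trees with $N$ leaves in O(N^3) time instead of enumerating the Catalan-many structures. The per-merge score function here is a toy stand-in for the paper's learned composition scores.

```python
# Chart-based marginalisation over latent binary tree structures.
def marginalise(leaf_scores, merge_score):
    """inside[i][j] = total weight of all binary trees over leaves i..j-1."""
    n = len(leaf_scores)
    inside = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        inside[i][i + 1] = leaf_scores[i]
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            inside[i][j] = sum(inside[i][k] * inside[k][j] * merge_score(i, k, j)
                               for k in range(i + 1, j))  # sum over split points
    return inside[0][n]

# With unit scores the marginal counts the trees: Catalan(n - 1).
n = 5
print(marginalise([1.0] * n, lambda i, k, j: 1.0))   # -> 14.0 = Catalan(4)
```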
- TreeCaps: Tree-Based Capsule Networks for Source Code Processing [28.61567319928316]
We propose TreeCaps, a new learning technique that fuses capsule networks with tree-based convolutional neural networks.
We find that TreeCaps is the most robust to semantic-preserving program transformations.
arXiv Detail & Related papers (2020-09-05T16:37:19Z)
- Born-Again Tree Ensembles [9.307453801175177]
Tree ensembles offer good prediction quality in various domains, but the concurrent use of multiple trees reduces the interpretability of the ensemble.
We study the process of constructing a single decision tree of minimum size that reproduces the exact same behavior as a given tree ensemble in its entire feature space.
This algorithm generates optimal born-again trees for many datasets of practical interest.
arXiv Detail & Related papers (2020-03-24T22:17:21Z)
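A hedged sketch of the observation underlying this construction: the union of the ensemble's split thresholds partitions feature space into cells on which the ensemble's prediction is constant, so a single tree over those thresholds can reproduce it exactly. The brute-force cell enumeration below skips the paper's actual contribution, the minimum-size optimisation.

```python
# Enumerate the cells induced by a forest's split thresholds and show how
# many distinct per-cell predictions a faithful single tree must reproduce.
from itertools import product
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
forest = RandomForestClassifier(n_estimators=3, max_depth=2,
                                random_state=0).fit(X, y)

# Collect every threshold the ensemble uses, per feature (sklearn marks
# leaf nodes with feature index -2).
thresholds = [set(), set()]
for est in forest.estimators_:
    t = est.tree_
    for feat, thr in zip(t.feature, t.threshold):
        if feat >= 0:
            thresholds[feat].add(thr)

# One representative midpoint per cell of the partition.
grids = []
for f in range(2):
    cuts = sorted(thresholds[f] | {X[:, f].min() - 1.0, X[:, f].max() + 1.0})
    grids.append([(a + b) / 2.0 for a, b in zip(cuts, cuts[1:])])

cells = np.array(list(product(*grids)))
preds = forest.predict(cells)       # constant on each cell by construction
print(f"{len(cells)} cells cover the feature space;",
      f"{len(np.unique(preds))} distinct predictions to reproduce")
```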
"Hierarchical Accumulation" encodes parse tree structures into self-attention at constant time complexity.
Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German translation task.
arXiv Detail & Related papers (2020-02-19T08:17:00Z)