DRTS Parsing with Structure-Aware Encoding and Decoding
- URL: http://arxiv.org/abs/2005.06901v1
- Date: Thu, 14 May 2020 12:09:23 GMT
- Title: DRTS Parsing with Structure-Aware Encoding and Decoding
- Authors: Qiankun Fu and Yue Zhang and Jiangming Liu and Meishan Zhang
- Abstract summary: State-of-the-art performance can be achieved by a neural sequence-to-sequence model.
We propose a structure-aware model that integrates structural information at both the encoding and decoding phases.
- Score: 28.711318411470497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discourse representation tree structure (DRTS) parsing is a novel semantic
parsing task which has attracted increasing attention recently. State-of-the-art
performance can be achieved by a neural sequence-to-sequence model, treating
the tree construction as an incremental sequence generation problem. Structural
information such as the input syntax and the intermediate skeleton of the partial
output, which could be useful for DRTS parsing, has been ignored in this model.
In this work, we propose a structure-aware model that integrates such structural
information at both the encoding and decoding phases, exploiting a graph
attention network (GAT) for effective modeling. Experimental results on a
benchmark dataset show that our proposed model is effective and obtains the best
performance in the literature.
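Below is a minimal, self-contained sketch (not the authors' released code) of the core mechanism the abstract names: a single-head graph attention (GAT) layer that re-encodes token states over a dependency graph. The arc list, dimensions, and activation choices are illustrative assumptions.

```python
# Hypothetical sketch of structure-aware encoding with a GAT layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (n, dim) token states; adj: (n, n) 0/1 dependency adjacency
        z = self.proj(h)
        n = z.size(0)
        pairs = torch.cat(
            [z.unsqueeze(1).expand(n, n, -1),
             z.unsqueeze(0).expand(n, n, -1)], dim=-1)      # (n, n, 2*dim)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)  # (n, n)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)   # attend over graph neighbours
        return F.elu(alpha @ z)                 # (n, dim) structure-aware states

# Toy usage: 5 tokens, hypothetical dependency arcs made symmetric, self-loops.
h = torch.randn(5, 64)
adj = torch.eye(5)
for head, dep in [(1, 0), (1, 2), (3, 1), (3, 4)]:
    adj[head, dep] = adj[dep, head] = 1.0
print(GATLayer(64)(h, adj).shape)  # torch.Size([5, 64])
```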
Related papers
- GraphER: A Structure-aware Text-to-Graph Model for Entity and Relation Extraction [3.579132482505273]
Information extraction is an important task in Natural Language Processing (NLP).
We propose a novel approach to this task by formulating it as graph structure learning (GSL).
This formulation allows for better interaction and structure-informed decisions for entity and relation prediction.
arXiv Detail & Related papers (2024-04-18T20:09:37Z)
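A hedged sketch of the graph-structure-learning idea above, as I read it: score every token pair with a bilinear layer and threshold the result into a predicted graph from which entities and relations could be read off. This is an illustration of the formulation, not GraphER's implementation.

```python
# Illustrative pairwise edge scorer; shapes and threshold are assumptions.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, h):
        n, d = h.shape
        left = h.unsqueeze(1).expand(n, n, d).reshape(-1, d)
        right = h.unsqueeze(0).expand(n, n, d).reshape(-1, d)
        return self.bilinear(left, right).view(n, n)  # edge logits

h = torch.randn(6, 128)                   # encoder states for 6 tokens
edges = torch.sigmoid(PairScorer(128)(h)) > 0.5  # thresholded graph
print(edges.shape)                        # torch.Size([6, 6])
```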
- Syntax-Aware Complex-Valued Neural Machine Translation [14.772317918560548]
We propose a method to incorporate syntax information into a complex-valued Encoder-Decoder architecture.
The proposed model jointly learns word-level and syntax-level attention scores from the source side to the target side using an attention mechanism.
The experimental results demonstrate that the proposed method can bring significant improvements in BLEU scores on two datasets.
arXiv Detail & Related papers (2023-07-17T15:58:05Z)
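As an illustration of attention over complex-valued states (one plausible realisation, assumed here rather than taken from the paper), scores can be read off as the real part of the Hermitian inner product between queries and keys:

```python
# Assumed sketch of complex-valued attention; not the paper's exact model.
import torch

def complex_attention(q, k, v):
    # q, k, v: complex tensors of shape (n, d)
    scores = (q @ k.conj().t()).real / q.size(-1) ** 0.5
    alpha = torch.softmax(scores, dim=-1).to(v.dtype)  # real weights, cast back
    return alpha @ v

n, d = 4, 8
q = torch.randn(n, d, dtype=torch.cfloat)
k = torch.randn(n, d, dtype=torch.cfloat)
v = torch.randn(n, d, dtype=torch.cfloat)
print(complex_attention(q, k, v).shape)  # torch.Size([4, 8])
```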
- Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings is a significant challenge.
We introduce a family of synthetic CFGs that produce hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn this CFG language and generate sentences based on it.
arXiv Detail & Related papers (2023-05-23T04:28:16Z)
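A toy version of the synthetic-CFG setup: expand nonterminals recursively to sample hierarchical sentences. The grammar below is a small made-up example, not one of the paper's CFGs.

```python
# Illustrative synthetic CFG and sampler.
import random

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["det", "n"], ["det", "adj", "n"]],
    "VP": [["v", "NP"], ["v"]],
}

def generate(symbol="S"):
    if symbol not in RULES:           # terminal symbol
        return [symbol]
    body = random.choice(RULES[symbol])
    return [tok for sym in body for tok in generate(sym)]

random.seed(0)
for _ in range(3):
    print(" ".join(generate()))
```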
- StrAE: Autoencoding for Pre-Trained Embeddings using Explicit Structure [5.2869308707704255]
StrAE is a Structured Autoencoder framework that, through strict adherence to explicit structure, enables effective learning of multi-level representations.
We show that our results are directly attributable to the informativeness of the structure provided as input, and that this is not the case for existing tree models.
We then extend StrAE to allow the model to define its own compositions using a simple localised-merge algorithm.
arXiv Detail & Related papers (2023-05-09T16:20:48Z)
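One plausible reading of a localised merge, sketched with a stand-in composition function (averaging); StrAE learns its composition, so treat this purely as an illustration of the control flow:

```python
# Assumed sketch: greedily fuse the most similar adjacent pair until one root remains.
import torch
import torch.nn.functional as F

def localised_merge(embs):
    nodes = list(embs)                    # sequence of (dim,) vectors
    while len(nodes) > 1:
        sims = [float(F.cosine_similarity(nodes[i], nodes[i + 1], dim=0))
                for i in range(len(nodes) - 1)]
        i = max(range(len(sims)), key=lambda j: sims[j])
        merged = (nodes[i] + nodes[i + 1]) / 2   # toy composition function
        nodes[i:i + 2] = [merged]
    return nodes[0]

root = localised_merge(torch.randn(6, 32))
print(root.shape)  # torch.Size([32])
```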
- Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations [70.41385310930846]
We present an end-to-end framework Structure-CLIP to enhance multi-modal structured representations.
We use scene graphs to guide the construction of semantic negative examples, which results in an increased emphasis on learning structured representations.
A Knowledge-Enhanced Encoder (KEE) is proposed to leverage scene graph knowledge (SGK) as input to further enhance structured representations.
arXiv Detail & Related papers (2023-05-06T03:57:05Z)
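A minimal sketch of building a semantic negative from a scene-graph triple by swapping subject and object roles; the triple and caption template are hypothetical:

```python
# Illustrative hard-negative construction from a scene-graph triple.
def to_caption(triple):
    subj, rel, obj = triple
    return f"a {subj} {rel} a {obj}"

positive = ("dog", "chasing", "cat")
negative = (positive[2], positive[1], positive[0])  # swap subject and object

print(to_caption(positive))   # a dog chasing a cat
print(to_caption(negative))   # a cat chasing a dog  (semantic negative)
```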
- Autoregressive Structured Prediction with Language Models [73.11519625765301]
We describe an approach to model structures as sequences of actions in an autoregressive manner with PLMs.
Our approach achieves the new state-of-the-art on all the structured prediction tasks we looked at.
arXiv Detail & Related papers (2022-10-26T13:27:26Z)
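To make "structures as action sequences" concrete, here is an assumed linearisation of NER spans into a bracket action string a PLM could generate token by token; the tag format is illustrative, not the paper's exact scheme:

```python
# Hypothetical linearisation of labelled spans into an action sequence.
tokens = ["Barack", "Obama", "visited", "Paris"]
spans = [(0, 2, "PER"), (3, 4, "LOC")]   # [start, end) spans with labels

actions = []
for i, tok in enumerate(tokens):
    for s, e, lab in spans:
        if i == s:
            actions.append(f"[{lab}")    # open a labelled span
    actions.append(tok)
    for s, e, lab in spans:
        if i == e - 1:
            actions.append("]")          # close the span

print(" ".join(actions))  # [PER Barack Obama ] visited [LOC Paris ]
```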
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weakly-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
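A hedged sketch of retrieval-augmented prompting in the RAP spirit: retrieve the most similar annotated example (token overlap stands in for a real retriever) and prepend it, with a schema, to each input. All names and data here are hypothetical:

```python
# Illustrative schema-aware reference-as-prompt construction.
def overlap(a, b):
    return len(set(a.split()) & set(b.split()))

bank = [
    ("Steve Jobs founded Apple", "(Steve Jobs, founder_of, Apple)"),
    ("Paris is in France", "(Paris, located_in, France)"),
]

def build_prompt(text, schema="person | founder_of | organisation"):
    ref_text, ref_triple = max(bank, key=lambda ex: overlap(ex[0], text))
    return (f"schema: {schema}\n"
            f"reference: {ref_text} -> {ref_triple}\n"
            f"input: {text} ->")

print(build_prompt("Bill Gates founded Microsoft"))
```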
- Code Representation Learning with Prüfer Sequences [2.2463154358632464]
An effective encoding of the source code of a computer program is critical to the success of sequence-to-sequence deep neural network models.
We propose to use the Prüfer sequence of the Abstract Syntax Tree (AST) of a computer program to design a sequential representation scheme.
Our representation makes it possible to develop deep-learning models in which signals carried by lexical tokens in the training examples can be exploited automatically and selectively.
arXiv Detail & Related papers (2021-11-14T07:27:38Z)
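The Prüfer encoding itself is standard: repeatedly delete the smallest-labelled leaf and record its neighbour. Applying it to an AST assumes the tree's nodes have been numbered 0..n-1 first; the toy tree below is illustrative, not one of the paper's programs.

```python
# Standard Prüfer sequence of a labelled tree given as an edge list.
from collections import defaultdict

def prufer_sequence(edges, n):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seq = []
    for _ in range(n - 2):
        leaf = min(u for u in range(n) if len(adj[u]) == 1)
        parent = next(iter(adj[leaf]))
        seq.append(parent)        # record the deleted leaf's neighbour
        adj[parent].discard(leaf)
        adj[leaf].clear()
    return seq

# Toy tree (could be a numbered AST): node 3 linked to 0, 1, 2, and 4.
print(prufer_sequence([(0, 3), (1, 3), (2, 3), (3, 4)], 5))  # [3, 3, 3]
```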
- Operation Embeddings for Neural Architecture Search [15.033712726016255]
We propose the replacement of fixed operator encoding with learnable representations in the optimization process.
Our method produces top-performing architectures that share similar operation and graph patterns.
arXiv Detail & Related papers (2021-05-11T09:17:10Z)
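The core replacement is simple to sketch: a learnable embedding table over the operation vocabulary instead of a fixed one-hot code. The vocabulary and dimensions below are illustrative assumptions:

```python
# Illustrative learnable operation embeddings for a NAS encoder.
import torch
import torch.nn as nn

OPS = ["conv3x3", "conv5x5", "maxpool", "skip", "zero"]
op_embed = nn.Embedding(len(OPS), 16)   # learned end-to-end with the encoder

arch = ["conv3x3", "skip", "maxpool"]   # one candidate architecture
ids = torch.tensor([OPS.index(o) for o in arch])
print(op_embed(ids).shape)              # torch.Size([3, 16])
```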
- AutoSTR: Efficient Backbone Search for Scene Text Recognition [80.7290173000068]
Scene text recognition (STR) is very challenging due to the diversity of text instances and the complexity of scenes.
We propose automated STR (AutoSTR) to search data-dependent backbones to boost text recognition performance.
Experiments demonstrate that, by searching data-dependent backbones, AutoSTR can outperform the state-of-the-art approaches on standard benchmarks.
arXiv Detail & Related papers (2020-03-14T06:51:04Z)
- Tree-structured Attention with Hierarchical Accumulation [103.47584968330325]
"Hierarchical Accumulation" encodes parse tree structures into self-attention at constant time complexity.
Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German translation task.
arXiv Detail & Related papers (2020-02-19T08:17:00Z)
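As a rough illustration of exposing a parse tree to self-attention (not the paper's constant-time accumulation algorithm), one can mask attention so each node sees only itself and its ancestors; the parent array is a toy example:

```python
# Illustrative tree-constrained attention mask.
import torch

parent = [-1, 0, 0, 1, 1]          # toy parse tree, node 0 is the root

n = len(parent)
mask = torch.zeros(n, n, dtype=torch.bool)
for i in range(n):
    j = i
    while j != -1:                 # walk from each node up to the root
        mask[i, j] = True
        j = parent[j]

scores = torch.randn(n, n)
scores = scores.masked_fill(~mask, float("-inf"))
alpha = torch.softmax(scores, dim=-1)
print(alpha[3])                    # weight only on nodes 3, 1, and 0
```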
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.