Learning compositional structures for semantic graph parsing
- URL: http://arxiv.org/abs/2106.04398v1
- Date: Tue, 8 Jun 2021 14:20:07 GMT
- Title: Learning compositional structures for semantic graph parsing
- Authors: Jonas Groschwitz, Meaghan Fowlie and Alexander Koller
- Abstract summary: We show how AM dependency parsers can be trained directly on the graphs with a neural latent-variable model.
Our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training.
- Score: 81.41592892863979
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AM dependency parsing is a method for neural semantic graph parsing that
exploits the principle of compositionality. While AM dependency parsers have
been shown to be fast and accurate across several graphbanks, they require
explicit annotations of the compositional tree structures for training. In the
past, these were obtained using complex graphbank-specific heuristics written
by experts. Here we show how they can instead be trained directly on the graphs
with a neural latent-variable model, drastically reducing the amount and
complexity of manual heuristics. We demonstrate that our model picks up on
several linguistic phenomena on its own and achieves comparable accuracy to
supervised training, greatly facilitating the use of AM dependency parsing for
new sembanks.
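The marginalization that replaces explicit tree annotations can be made concrete with a toy example. The sketch below is a simplified illustration under stated assumptions, not the authors' implementation: it assumes tree scores are already computed by some neural scorer and shows only the negative log-marginal-likelihood loss over the latent compositional trees that derive the gold graph.
```python
import math

def logsumexp(xs):
    # Numerically stable log-sum-exp.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def neg_log_marginal(gold_tree_scores, all_tree_scores):
    """Negative log-marginal likelihood of the gold graph.

    gold_tree_scores: scores of latent trees that evaluate to the gold graph.
    all_tree_scores: scores of all candidate trees (normalization term).
    Both are hypothetical inputs; a real parser would score trees neurally.
    """
    return logsumexp(all_tree_scores) - logsumexp(gold_tree_scores)

# Toy numbers: three candidate trees, two of which derive the gold graph.
all_scores = [2.1, 0.3, -1.0]
gold_scores = [2.1, 0.3]
print(neg_log_marginal(gold_scores, all_scores))  # small loss: mass sits on gold-consistent trees
```
Because no single tree is supervised, gradient descent is free to shift probability mass between the consistent trees, which is how the model can discover compositional analyses of linguistic phenomena on its own.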
Related papers
- S$^2$GSL: Incorporating Segment to Syntactic Enhanced Graph Structure Learning for Aspect-based Sentiment Analysis [19.740223755240734]
We propose S$^2$GSL, incorporating Segment to Syntactic enhanced Graph Structure Learning for ABSA.
S$2$GSL is featured with a segment-aware semantic graph learning and a syntax-based latent graph learning.
arXiv Detail & Related papers (2024-06-05T03:44:35Z)
- Discrete Latent Structure in Neural Networks [21.890439357275696]
This text explores three broad strategies for learning with discrete latent structure.
We show how most consist of the same small set of fundamental building blocks, but use them differently, leading to substantially different applicability and properties.
arXiv Detail & Related papers (2023-01-18T12:30:44Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weakly supervised data as a prompt for each sample; a sketch of the retrieval-as-prompt idea follows this entry.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
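A minimal, hypothetical sketch of retrieval-as-prompt; the similarity function, prompt format, and data layout are all assumptions for illustration, not RAP's actual design.
```python
def overlap(a, b):
    # Crude lexical similarity: shared-token count (a stand-in for a real retriever).
    return len(set(a.split()) & set(b.split()))

def build_prompt(sample, annotated):
    """Retrieve the most similar annotated example and prepend it,
    with its schema triples, as an in-context reference."""
    ref_text, ref_triples = max(annotated, key=lambda r: overlap(sample, r[0]))
    return f"Reference: {ref_text}\nTriples: {ref_triples}\nInput: {sample}\nTriples:"

annotated = [("Paris is the capital of France.", "(Paris, capital_of, France)")]
print(build_prompt("Berlin is the capital of Germany.", annotated))
```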
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective graph perturbations via node and edge edit operations (illustrated after this entry).
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
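A minimal illustration of building contrastive negatives by edge edits; the edit policy and the relation label below are hypothetical, not the paper's exact operations.
```python
import random

def perturb_edges(nodes, edges, rng):
    """edges: set of (head, relation, tail) triples; returns a corrupted copy
    that is structurally close to, but different from, the gold graph."""
    corrupted = set(edges)
    corrupted.discard(rng.choice(sorted(corrupted)))   # edge deletion
    h, t = rng.sample(nodes, 2)
    corrupted.add((h, "hypothetical-rel", t))          # edge addition
    return corrupted

rng = random.Random(0)
nodes = ["rain", "wet ground", "slippery road"]
gold = {("rain", "causes", "wet ground"), ("wet ground", "causes", "slippery road")}
print(perturb_edges(nodes, gold, rng))  # a near-miss negative for contrastive training
```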
- Unsupervised Learning of Explainable Parse Trees for Improved Generalisation [15.576061447736057]
We propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures.
We also demonstrate the superior performance of our proposed model on natural language inference, semantic relatedness, and sentiment analysis tasks.
arXiv Detail & Related papers (2021-04-11T12:10:03Z)
- Structural Adapters in Pretrained Language Models for AMR-to-text Generation [59.50420985074769]
Previous work on text generation from graph-structured data relies on pretrained language models (PLMs).
We propose StructAdapt, an adapter method to encode graph structure into PLMs; a simplified sketch of the adapter idea follows this entry.
arXiv Detail & Related papers (2021-03-16T15:06:50Z)
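A simplified sketch of the adapter idea, assuming a frozen PLM hidden size of 768 and a row-normalized adjacency matrix; the class name and design details are assumptions, and StructAdapt's actual architecture may differ.
```python
import torch
import torch.nn as nn

class GraphAdapter(nn.Module):
    """A small bottleneck inserted into a frozen PLM layer whose inner
    transform mixes node states over the graph's adjacency matrix."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.mix = nn.Linear(bottleneck, bottleneck)   # shared neighbor transform
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h, adj):
        # h: (nodes, hidden); adj: (nodes, nodes) row-normalized adjacency.
        z = torch.relu(self.down(h))
        z = torch.relu(adj @ self.mix(z))              # one graph-convolution step
        return h + self.up(z)                          # residual back into the PLM

adapter = GraphAdapter()
h = torch.randn(4, 768)
adj = torch.eye(4)                                     # toy graph: self-loops only
print(adapter(h, adj).shape)                           # torch.Size([4, 768])
```
Only the adapter's few parameters would be trained, which is the usual motivation for adapter methods over full fine-tuning.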
- Graph Ensemble Learning over Multiple Dependency Trees for Aspect-level Sentiment Classification [37.936820137442254]
We propose a simple yet effective graph ensemble technique, GraphMerge, to make use of the predictions from different parses.
Instead of assigning one set of model parameters to each dependency tree, we first combine the dependency relations from different parses before applying GNNs over the resulting graph (sketched after this entry).
Our experiments on the SemEval 2014 Task 4 and ACL 14 Twitter datasets show that our GraphMerge model not only outperforms models with a single dependency tree, but also beats other ensemble models without adding model parameters.
arXiv Detail & Related papers (2021-03-12T22:27:23Z)
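A minimal sketch of the merge step, under the assumption that each parser yields labeled (head, label, dependent) edges; the parser names are illustrative, not part of the paper.
```python
def merge_parses(parses):
    """parses: list of edge sets, each edge a (head, label, dependent) triple.
    Returns the union, so one GNN runs over a single merged graph instead of
    one GNN (or one parameter set) per parse."""
    merged = set()
    for edges in parses:
        merged |= edges
    return merged

parser_a = {(1, "nsubj", 0), (1, "obj", 2)}
parser_b = {(1, "nsubj", 0), (2, "obj", 1)}  # disagrees on the object edge
print(sorted(merge_parses([parser_a, parser_b])))
```
The GNN then aggregates over all retained edges, hedging against any single parser's errors without adding model parameters.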
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose a Semantic Graph Convolutional Networks (SGCN) that explores the implicit semantics by learning latent semantic-paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- Towards Interpretable Multi-Task Learning Using Bilevel Programming [18.293397644865454]
Interpretable Multi-Task Learning can be expressed as learning a sparse graph of the task relationships based on the prediction performance of the learned models.
We show empirically how the induced sparse graph improves the interpretability of the learned models and their relationship on synthetic and real data, without sacrificing generalization performance.
arXiv Detail & Related papers (2020-09-11T15:04:27Z)
- Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.