Transition-based Bubble Parsing: Improvements on Coordination Structure Prediction
- URL: http://arxiv.org/abs/2107.06905v1
- Date: Wed, 14 Jul 2021 18:00:05 GMT
- Title: Transition-based Bubble Parsing: Improvements on Coordination Structure Prediction
- Authors: Tianze Shi, Lillian Lee
- Abstract summary: We introduce a transition system and neural models for parsing bubble-enhanced structures.
Experimental results on the English Penn Treebank and the English GENIA corpus show that our parsers beat previous state-of-the-art approaches on the task of coordination structure prediction.
- Score: 18.71574180551552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a transition-based bubble parser to perform coordination structure
identification and dependency-based syntactic analysis simultaneously. Bubble
representations were proposed in the formal linguistics literature decades ago;
they enhance dependency trees by encoding coordination boundaries and internal
relationships within coordination structures explicitly. In this paper, we
introduce a transition system and neural models for parsing these
bubble-enhanced structures. Experimental results on the English Penn Treebank
and the English GENIA corpus show that our parsers beat previous
state-of-the-art approaches on the task of coordination structure prediction,
especially for the subset of sentences with complex coordination structures.
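To make the bubble representation concrete, below is a minimal Python sketch of one possible encoding of a bubble-enhanced dependency structure. The `Node` and `Bubble` classes, their fields, and the example sentence are illustrative assumptions made for this summary, not the authors' implementation or data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union


@dataclass
class Node:
    """A single token in the sentence."""
    index: int                    # 1-based position in the sentence
    form: str                     # surface word form
    head: Optional[int] = None    # index of the governing node, if attached
    deprel: Optional[str] = None  # dependency label to that head


@dataclass
class Bubble:
    """A coordination bubble: an explicit grouping of conjuncts (and the
    coordinating conjunction) that behaves as a single unit in the tree."""
    members: List[Union[Node, "Bubble"]] = field(default_factory=list)
    head: Optional[int] = None    # the bubble attaches to an external head as a whole
    deprel: Optional[str] = None

    def span(self) -> Tuple[int, int]:
        """Left/right token boundaries covered by the bubble, i.e. the
        explicit coordination boundary encoded by the representation."""
        positions: List[int] = []
        for m in self.members:
            if isinstance(m, Node):
                positions.append(m.index)
            else:                          # nested bubble
                lo, hi = m.span()
                positions.extend((lo, hi))
        return min(positions), max(positions)


# Example: "We bought apples and oranges".  The coordination
# "apples and oranges" is wrapped in one bubble, whose boundary (3, 5)
# is explicit and which attaches to the verb "bought" as its object.
tokens = [Node(1, "We"), Node(2, "bought"), Node(3, "apples"),
          Node(4, "and"), Node(5, "oranges")]
coord = Bubble(members=tokens[2:5], head=2, deprel="obj")
print(coord.span())  # -> (3, 5)
```

In a transition-based setting like the one the abstract describes, a parser state would manipulate both `Node` and `Bubble` objects on its stack and buffer, presumably with additional transitions for introducing and completing bubbles; the paper defines the actual transition inventory and the neural scoring models.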
Related papers
- Constructive Approach to Bidirectional Causation between Qualia Structure and Language Emergence [5.906966694759679]
This paper presents a novel perspective on the bidirectional causation between language emergence and relational structure of subjective experiences.
We hypothesize that languages with distributional semantics, e.g., syntactic-semantic structures, may have emerged through the process of aligning internal representations among individuals.
arXiv Detail & Related papers (2024-09-14T11:03:12Z)
- Character-Level Chinese Dependency Parsing via Modeling Latent Intra-Word Structure [11.184330703168893]
This paper proposes modeling latent internal structures within words in Chinese.
A constrained Eisner algorithm is implemented to ensure the compatibility of character-level trees.
A detailed analysis reveals that a coarse-to-fine parsing strategy empowers the model to predict more linguistically plausible intra-word structures.
arXiv Detail & Related papers (2024-06-06T06:23:02Z)
- Linguistic Structure Induction from Language Models [1.8130068086063336]
This thesis focuses on producing constituency and dependency structures from Language Models (LMs) in an unsupervised setting.
I present a detailed study on StructFormer (SF), which retrofits a transformer architecture with an encoder network to produce constituency and dependency structures.
I present six experiments to analyze and address this field's challenges.
arXiv Detail & Related papers (2024-03-11T16:54:49Z)
- Unsupervised Chunking with Hierarchical RNN [62.15060807493364]
This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner.
We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions.
Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, enhancing phrase F1 score by up to 6 percentage points.
arXiv Detail & Related papers (2023-09-10T02:55:12Z)
- Revisiting Conversation Discourse for Dialogue Disentanglement [88.3386821205896]
We propose enhancing dialogue disentanglement by taking full advantage of the dialogue discourse characteristics.
We develop a structure-aware framework to integrate the rich structural features for better modeling the conversational semantic context.
Our work has great potential to facilitate broader multi-party multi-thread dialogue applications.
arXiv Detail & Related papers (2023-06-06T19:17:47Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Compositional Generalization Requires Compositional Parsers [69.77216620997305]
We compare sequence-to-sequence models and models guided by compositional principles on the recent COGS corpus.
We show that structural generalization is a key measure of compositional generalization and requires models that are aware of complex structure.
arXiv Detail & Related papers (2022-02-24T07:36:35Z)
- Learning compositional structures for semantic graph parsing [81.41592892863979]
We show how AM dependency parsing can be trained directly on a neural latent-variable model.
Our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training.
arXiv Detail & Related papers (2021-06-08T14:20:07Z)
- Hierarchical Poset Decoding for Compositional Generalization in Language [52.13611501363484]
We formalize human language understanding as a structured prediction task where the output is a partially ordered set (poset).
Current encoder-decoder architectures do not take the poset structure of semantics into account properly.
We propose a novel hierarchical poset decoding paradigm for compositional generalization in language.
arXiv Detail & Related papers (2020-10-15T14:34:26Z)
- A Hybrid Framework for Topic Structure using Laughter Occurrences [0.3680403821470856]
In this work we combine both paralinguistic and linguistic knowledge into a hybrid framework through a multi-level hierarchy.
The laughter occurrences are used as paralinguistic information from the multiparty meeting transcripts of the ICSI database.
This training-free topic-structuring approach is applicable to online understanding of spoken dialogs.
arXiv Detail & Related papers (2019-12-31T23:31:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.