On the Challenges of Fully Incremental Neural Dependency Parsing
- URL: http://arxiv.org/abs/2309.16254v1
- Date: Thu, 28 Sep 2023 08:44:08 GMT
- Title: On the Challenges of Fully Incremental Neural Dependency Parsing
- Authors: Ana Ezquerro, Carlos Gómez-Rodríguez, David Vilares
- Abstract summary: Since the popularization of BiLSTMs and Transformer-based bidirectional encoders, state-of-the-art syntactic parsers have lacked incrementality.
This paper explores whether fully incremental dependency parsing with modern architectures can be competitive.
We build parsers combining strictly left-to-right neural encoders with fully incremental sequence-labeling and transition-based decoders.
- Score: 7.466159270333272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since the popularization of BiLSTMs and Transformer-based bidirectional
encoders, state-of-the-art syntactic parsers have lacked incrementality,
requiring access to the whole sentence and deviating from human language
processing. This paper explores whether fully incremental dependency parsing
with modern architectures can be competitive. We build parsers combining
strictly left-to-right neural encoders with fully incremental sequence-labeling
and transition-based decoders. The results show that fully incremental parsing
with modern architectures considerably lags behind bidirectional parsing,
noting the challenges of psycholinguistically plausible parsing.
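A minimal sketch of the kind of fully incremental setup the abstract describes: a strictly left-to-right (unidirectional) encoder paired with a sequence-labeling head that predicts a dependency label for each word as soon as it arrives, with no access to future tokens. Module names, dimensions, and the label inventory are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a fully incremental parser:
# a strictly left-to-right encoder that emits one dependency label per
# token as soon as the token arrives, with no lookahead.
import torch
import torch.nn as nn

class IncrementalLabelingParser(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid_dim=200, n_labels=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Unidirectional LSTM: the hidden state depends only on the prefix.
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, n_labels)  # e.g. (head offset, deprel) labels

    def forward(self, token_ids):
        state = None
        predictions = []
        # Consume the sentence word by word, predicting immediately.
        for t in range(token_ids.size(1)):
            step = self.emb(token_ids[:, t:t + 1])
            hidden, state = self.rnn(step, state)
            predictions.append(self.out(hidden[:, -1]).argmax(-1))
        return torch.stack(predictions, dim=1)

parser = IncrementalLabelingParser(vocab_size=1000)
print(parser(torch.randint(0, 1000, (1, 6))).shape)  # torch.Size([1, 6])
```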
Related papers
- Sentiment analysis in Tourism: Fine-tuning BERT or sentence embeddings
concatenation? [0.0]
We conduct a comparative study between fine-tuning BERT (Bidirectional Encoder Representations from Transformers) and concatenating two sentence embeddings to boost the performance of a stacked Bidirectional Long Short-Term Memory - Bidirectional Gated Recurrent Units model.
We search for the best learning rate for each of the two approaches and compare the best embeddings for each sentence embedding combination.
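A hedged sketch of the embeddings-concatenation branch described above: two token-level embeddings are concatenated and fed to a stacked BiLSTM-BiGRU classifier. The embedding sources, dimensions, and pooling choice are assumptions for illustration.

```python
# Sketch of concatenating two embeddings as input to a stacked BiLSTM + BiGRU.
import torch
import torch.nn as nn

class ConcatBiLSTMBiGRU(nn.Module):
    def __init__(self, emb_dim=768, hid=128, n_classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(2 * emb_dim, hid, bidirectional=True, batch_first=True)
        self.bigru = nn.GRU(2 * hid, hid, bidirectional=True, batch_first=True)
        self.clf = nn.Linear(2 * hid, n_classes)

    def forward(self, emb_a, emb_b):
        # Concatenate the two token-level embeddings along the feature axis.
        x = torch.cat([emb_a, emb_b], dim=-1)
        x, _ = self.bilstm(x)
        x, _ = self.bigru(x)
        return self.clf(x[:, -1])          # last time step used for classification

model = ConcatBiLSTMBiGRU()
a = torch.randn(2, 10, 768)  # e.g. embeddings from one encoder
b = torch.randn(2, 10, 768)  # e.g. embeddings from a second encoder
print(model(a, b).shape)     # torch.Size([2, 3])
```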
arXiv Detail & Related papers (2023-12-12T23:23:23Z) - Tram: A Token-level Retrieval-augmented Mechanism for Source Code Summarization [76.57699934689468]
We propose a fine-grained Token-level retrieval-augmented mechanism (Tram) on the decoder side to enhance the performance of neural models.
To overcome the challenge of token-level retrieval in capturing contextual code semantics, we also propose integrating code semantics into individual summary tokens.
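An illustrative sketch (not the Tram implementation) of a token-level retrieval step on the decoder side: the current decoder state queries a datastore of (state, next-token) pairs and the retrieved distribution is interpolated with the model's own prediction. All names and the interpolation scheme are hypothetical.

```python
# Token-level retrieval augmentation, sketched with cosine-similarity kNN.
import torch
import torch.nn.functional as F

def retrieval_augmented_probs(dec_state, model_logits, keys, values, vocab_size,
                              k=4, lam=0.3):
    # Nearest neighbours by cosine similarity over stored decoder states.
    sims = F.cosine_similarity(dec_state.unsqueeze(0), keys, dim=-1)
    top_sim, top_idx = sims.topk(k)
    weights = F.softmax(top_sim, dim=-1)
    retrieved = torch.zeros(vocab_size)
    retrieved.scatter_add_(0, values[top_idx], weights)   # weight retrieved tokens
    model_probs = F.softmax(model_logits, dim=-1)
    return (1 - lam) * model_probs + lam * retrieved      # interpolate

vocab = 100
keys = torch.randn(500, 64)               # stored decoder states
values = torch.randint(0, vocab, (500,))  # their gold next tokens
probs = retrieval_augmented_probs(torch.randn(64), torch.randn(vocab),
                                  keys, values, vocab)
print(probs.sum())  # ~1.0
```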
arXiv Detail & Related papers (2023-05-18T16:02:04Z) - Real-World Compositional Generalization with Disentangled
Sequence-to-Sequence Learning [81.24269148865555]
A recently proposed Disentangled sequence-to-sequence model (Dangle) shows promising generalization capability.
We introduce two key modifications to this model which encourage more disentangled representations and improve its compute and memory efficiency.
Specifically, instead of adaptively re-encoding source keys and values at each time step, we disentangle their representations and only re-encode keys periodically.
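A schematic sketch of the efficiency change described above: source keys are re-encoded only every few decoding steps while values are encoded once, rather than re-encoding everything at each step. The modules and attention details are simplified assumptions, not the paper's code.

```python
# Periodic key re-encoding inside a (heavily simplified) decoding loop.
import torch
import torch.nn as nn

emb_dim, steps, period = 64, 12, 4
src = torch.randn(1, 10, emb_dim)                 # source token embeddings
key_encoder = nn.Linear(emb_dim, emb_dim)         # stands in for a full encoder
value_proj = nn.Linear(emb_dim, emb_dim)
values = value_proj(src)                          # values encoded once

keys = None
for t in range(steps):
    if t % period == 0:                           # periodic re-encoding of keys
        keys = key_encoder(src)
    query = torch.randn(1, 1, emb_dim)            # current decoder state (dummy)
    attn = torch.softmax(query @ keys.transpose(1, 2) / emb_dim ** 0.5, dim=-1)
    context = attn @ values                       # (1, 1, emb_dim)
```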
arXiv Detail & Related papers (2022-12-12T15:40:30Z) - Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive
Text Summarization [15.367455931848252]
We present a sequence-to-sequence (seq2seq) autoencoder via contrastive learning for abstractive text summarization.
Our model adopts a standard Transformer-based architecture with a multi-layer bi-directional encoder and an auto-regressive decoder.
We conduct experiments on two datasets and demonstrate that our model outperforms many existing benchmarks.
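A minimal InfoNCE-style contrastive loss over encoder representations of two paired views (e.g. a document and a perturbed copy); the pairing scheme and temperature are assumptions, not the paper's exact objective.

```python
# Contrastive loss: matching pairs sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def contrastive_loss(z_a, z_b, temperature=0.1):
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature          # similarity of every pair
    targets = torch.arange(z_a.size(0))           # positives on the diagonal
    return F.cross_entropy(logits, targets)

batch = torch.randn(8, 256)
print(contrastive_loss(batch, batch + 0.01 * torch.randn(8, 256)))
```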
arXiv Detail & Related papers (2021-08-26T18:45:13Z) - Dependency Parsing with Bottom-up Hierarchical Pointer Networks [0.7412445894287709]
Left-to-right and top-down transition-based algorithms are among the most accurate approaches for performing dependency parsing.
We propose two novel transition-based alternatives: an approach that parses a sentence in right-to-left order and a variant that does it from the outside in.
We empirically test the proposed neural architecture with the different algorithms on a wide variety of languages, outperforming the original approach in practically all of them.
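As a toy illustration of the right-to-left variant, one can reverse the sentence, run any left-to-right parser, and map the predicted head indices back to the original order; the dummy parser below is a placeholder assumption, not the proposed pointer network.

```python
# Right-to-left parsing by reversing the input and remapping head indices.
def parse_left_to_right(tokens):
    # Placeholder: attach every word to the previous one (0 = root).
    return [i for i in range(len(tokens))]

def parse_right_to_left(tokens):
    n = len(tokens)
    rev_heads = parse_left_to_right(tokens[::-1])
    heads = [0] * n
    for j, h in enumerate(rev_heads):
        # Reversed position j maps to original position n-1-j (0-based);
        # head index 0 (root) is kept, other heads are mirrored back.
        heads[n - 1 - j] = 0 if h == 0 else n + 1 - h
    return heads

print(parse_right_to_left(["They", "read", "books"]))  # [2, 3, 0]
```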
arXiv Detail & Related papers (2021-05-20T09:10:42Z) - Enriching Non-Autoregressive Transformer with Syntactic and
Semantic Structures for Neural Machine Translation [54.864148836486166]
We propose to incorporate the explicit syntactic and semantic structures of languages into a non-autoregressive Transformer.
Our model achieves significantly faster decoding while preserving translation quality, compared with several state-of-the-art non-autoregressive models.
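A deliberately small sketch of one generic way explicit structure can enter an encoder: syntactic tag embeddings are added to word embeddings before the Transformer layers. This is an illustrative assumption, not the paper's model.

```python
# Injecting syntactic tags into a Transformer encoder by summed embeddings.
import torch
import torch.nn as nn

words = torch.randint(0, 1000, (2, 7))
pos_tags = torch.randint(0, 20, (2, 7))
word_emb = nn.Embedding(1000, 64)
tag_emb = nn.Embedding(20, 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(64, 4, batch_first=True), 2)
x = word_emb(words) + tag_emb(pos_tags)   # add syntactic information
print(encoder(x).shape)                   # torch.Size([2, 7, 64])
```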
arXiv Detail & Related papers (2021-01-22T04:12:17Z) - Syntactic representation learning for neural network based TTS with
syntactic parse tree traversal [49.05471750563229]
We propose a syntactic representation learning method based on syntactic parse tree to automatically utilize the syntactic structure information.
Experimental results demonstrate the effectiveness of our proposed approach.
For sentences with multiple syntactic parse trees, prosodic differences can be clearly perceived in the synthesized speech.
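A toy depth-first traversal of a constituency parse that linearizes node labels into a sequence a TTS front-end could embed per word; the nested-tuple tree format and feature scheme are assumptions for illustration.

```python
# Depth-first traversal of a parse tree into (label, depth) features.
def traverse(tree, depth=0, out=None):
    out = [] if out is None else out
    label, children = tree
    out.append((label, depth))
    for child in children:
        if isinstance(child, tuple):
            traverse(child, depth + 1, out)
        else:                              # leaf word
            out.append((child, depth + 1))
    return out

parse = ("S", [("NP", ["They"]), ("VP", [("V", ["read"]), ("NP", ["books"])])])
print(traverse(parse))
```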
arXiv Detail & Related papers (2020-12-13T05:52:07Z) - A Unifying Theory of Transition-based and Sequence Labeling Parsing [14.653008985229617]
We map transition-based parsing algorithms that read sentences from left to right to sequence labeling encodings of syntactic trees.
This establishes a theoretical relation between transition-based parsing and sequence-labeling parsing.
We implement sequence labeling versions of four algorithms, showing that they are learnable and obtain comparable performance to existing encodings.
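One common sequence-labeling encoding in this family assigns each word a label made of its head's relative offset and its dependency relation; the small sketch below shows encoding and decoding under that assumption (the paper covers several encodings).

```python
# Relative head-offset encoding of a dependency tree as per-token labels.
def encode(heads, rels):
    # heads[i] is the 1-based head of word i+1 (0 = root).
    return [(heads[i] - (i + 1), rels[i]) for i in range(len(heads))]

def decode(labels):
    # The root is recovered from its relation label in this toy version.
    return [0 if rel == "root" else i + 1 + offset
            for i, (offset, rel) in enumerate(labels)]

heads = [2, 0, 2]                    # "They read books"
rels = ["nsubj", "root", "obj"]
labels = encode(heads, rels)
print(labels)                        # [(1, 'nsubj'), (-2, 'root'), (-1, 'obj')]
assert decode(labels) == heads
```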
arXiv Detail & Related papers (2020-11-01T18:25:15Z) - Hierarchical Poset Decoding for Compositional Generalization in Language [52.13611501363484]
We formalize human language understanding as a structured prediction task where the output is a partially ordered set (poset).
Current encoder-decoder architectures do not take the poset structure of semantics into account properly.
We propose a novel hierarchical poset decoding paradigm for compositional generalization in language.
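A small illustration of the poset view: the target meaning is a set of atoms with only a partial order between them, so any linearization that respects the order is equally valid. The example atoms and constraints are invented for illustration.

```python
# Count which linearizations of a small atom set respect a partial order.
from itertools import permutations

atoms = ["agent(x, user)", "theme(x, book)", "event(x, borrow)"]
order = {("event(x, borrow)", "agent(x, user)"),   # event precedes its roles
         ("event(x, borrow)", "theme(x, book)")}

def respects(seq):
    pos = {a: i for i, a in enumerate(seq)}
    return all(pos[a] < pos[b] for a, b in order)

valid = [seq for seq in permutations(atoms) if respects(seq)]
print(len(valid), "of 6 linearizations are consistent with the poset")  # 2 of 6
```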
arXiv Detail & Related papers (2020-10-15T14:34:26Z) - Incremental Processing in the Age of Non-Incremental Encoders: An Empirical Assessment of Bidirectional Models for Incremental NLU [19.812562421377706]
Bidirectional LSTMs and Transformers assume that the sequence to be encoded is available in full.
We investigate how they behave under incremental interfaces, when partial output must be provided based on the input seen so far.
Results support the possibility of using bidirectional encoders in incremental mode while retaining most of their non-incremental quality.
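A sketch of such a restart-incremental interface: at each time step the full bidirectional encoder is re-run on the current prefix and labels are emitted for every token seen so far (earlier labels may be revised). The tagger, sizes, and label set are stand-in assumptions.

```python
# Restart-incremental use of a bidirectional tagger: re-encode each prefix.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab=1000, emb=64, hid=64, n_labels=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hid, n_labels)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.out(h).argmax(-1)

tagger = BiLSTMTagger()
sentence = torch.randint(0, 1000, (1, 6))
for t in range(1, sentence.size(1) + 1):
    prefix_labels = tagger(sentence[:, :t])   # full re-encode of the prefix
    print(t, prefix_labels.tolist())          # earlier labels may be revised
```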
arXiv Detail & Related papers (2020-10-11T19:51:21Z) - Bi-Decoder Augmented Network for Neural Machine Translation [108.3931242633331]
We propose a novel Bi-Decoder Augmented Network (BiDAN) for the neural machine translation task.
Since each decoder transforms the representations of the input text into its corresponding language, jointly training with two target ends gives the shared encoder the potential to produce a language-independent semantic space.
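A schematic sketch of a shared encoder feeding two output heads, one per target end, trained jointly; the layer choices are simplifications used for illustration, not the BiDAN implementation.

```python
# Shared encoder with two output heads and a joint loss over both ends.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoDecoderNet(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(src_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # shared encoder
        self.dec_tgt = nn.Linear(dim, tgt_vocab)             # target-language head
        self.dec_src = nn.Linear(dim, src_vocab)             # source-side head

    def forward(self, src_ids):
        h, _ = self.encoder(self.emb(src_ids))
        return self.dec_tgt(h), self.dec_src(h)

model = TwoDecoderNet()
src = torch.randint(0, 1000, (2, 8))
tgt = torch.randint(0, 1000, (2, 8))                         # dummy references
tgt_logits, src_logits = model(src)
# Joint loss pushes the shared encoder toward a language-independent space.
loss = (F.cross_entropy(tgt_logits.reshape(-1, 1000), tgt.reshape(-1))
        + F.cross_entropy(src_logits.reshape(-1, 1000), src.reshape(-1)))
print(loss.item())
```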
arXiv Detail & Related papers (2020-01-14T02:05:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.