Less is More: A Lightweight and Robust Neural Architecture for Discourse
Parsing
- URL: http://arxiv.org/abs/2210.09537v2
- Date: Fri, 8 Sep 2023 05:37:35 GMT
- Title: Less is More: A Lightweight and Robust Neural Architecture for Discourse
Parsing
- Authors: Ming Li, Ruihong Huang
- Abstract summary: We propose an alternative lightweight neural architecture that removes multiple complex feature extractors and only utilizes learnable self-attention modules.
Experiments on three common discourse parsing tasks show that powered by recent pretrained language models, the lightweight architecture obtains much better generalizability and robustness.
- Score: 27.28989421841165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex feature extractors are widely employed for text representation
building. However, these complex feature extractors make the NLP systems prone
to overfitting especially when the downstream training datasets are relatively
small, which is the case for several discourse parsing tasks. Thus, we propose
an alternative lightweight neural architecture that removes multiple complex
feature extractors and only utilizes learnable self-attention modules to
indirectly exploit pretrained neural language models, in order to maximally
preserve the generalizability of pre-trained language models. Experiments on
three common discourse parsing tasks show that, powered by recent pretrained
language models, the lightweight architecture consisting of only two
self-attention layers obtains much better generalizability and robustness.
Meanwhile, it achieves comparable or even better system performance with fewer
learnable parameters and less processing time.
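The core idea of the abstract — feed frozen pretrained-LM representations through only two learnable self-attention layers instead of complex feature extractors — can be sketched as a toy. This is an illustrative NumPy sketch under assumed dimensions, not the authors' implementation; the weights are random placeholders where a real system would train them, and the input stands in for pretrained-LM outputs.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (seq_len, d) representations, e.g. frozen pretrained-LM outputs.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 16        # hidden size (illustrative)
seq_len = 8   # number of discourse-unit representations (illustrative)
x = rng.standard_normal((seq_len, d))  # stand-in for pretrained-LM outputs

# Two stacked learnable self-attention layers, mirroring the paper's
# lightweight head; these weights are untrained placeholders.
for _ in range(2):
    w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    x = self_attention(x, w_q, w_k, w_v)

print(x.shape)  # (8, 16)
```

The point of the sketch is the parameter count: the only learnable pieces are the per-layer projection matrices, with no recurrent or tree-structured extractors on top of the pretrained encoder.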
Related papers
- Transformers Pretrained on Procedural Data Contain Modular Structures for Algorithmic Reasoning [40.84344912259233]
We identify several beneficial forms of procedural data, together with specific algorithmic reasoning skills that improve in small transformers. Our core finding is that different procedural rules instil distinct but complementary inductive structures in the model. Most interestingly, the structures induced by multiple rules can be composed to jointly impart multiple capabilities.
arXiv Detail & Related papers (2025-05-28T12:50:09Z) - In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study in-context learning (ICL) through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z) - Split and Rephrase with Large Language Models [2.499907423888049]
The Split and Rephrase (SPRP) task consists of splitting complex sentences into a sequence of shorter grammatical sentences.
We evaluate large language models on the task, showing that they can provide large improvements over the state of the art on the main metrics.
arXiv Detail & Related papers (2023-12-18T10:16:37Z) - Pre-Training to Learn in Context [138.0745138788142]
The in-context learning ability of language models is not fully exploited because they are not explicitly trained to learn in context.
We propose PICL (Pre-training for In-Context Learning), a framework to enhance the language models' in-context learning ability.
Our experiments show that PICL is more effective and task-generalizable than a range of baselines, outperforming larger language models with nearly 4x as many parameters.
arXiv Detail & Related papers (2023-05-16T03:38:06Z) - Language Model Pre-Training with Sparse Latent Typing [66.75786739499604]
We propose a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types.
Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge.
arXiv Detail & Related papers (2022-10-23T00:37:08Z) - Training Naturalized Semantic Parsers with Very Little Data [10.709587018625275]
State-of-the-art (SOTA) semantic parsers are seq2seq architectures based on large language models that have been pretrained on vast amounts of text.
Recent work has explored a reformulation of semantic parsing whereby the output sequences are themselves natural language sentences.
We show that this method delivers new SOTA few-shot performance on the Overnight dataset.
arXiv Detail & Related papers (2022-04-29T17:14:54Z) - Probing Structured Pruning on Multilingual Pre-trained Models: Settings,
Algorithms, and Efficiency [62.0887259003594]
This work investigates three aspects of structured pruning on multilingual pre-trained language models: settings, algorithms, and efficiency.
Experiments on nine downstream tasks show several counter-intuitive phenomena.
We present Dynamic Sparsification, a simple approach that allows training the model once and adapting to different model sizes at inference.
arXiv Detail & Related papers (2022-04-06T06:29:52Z) - Distributionally Robust Recurrent Decoders with Random Network
Distillation [93.10261573696788]
We propose a method based on OOD detection with Random Network Distillation to allow an autoregressive language model to disregard OOD context during inference.
We apply our method to a GRU architecture, demonstrating improvements on multiple language modeling (LM) datasets.
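The Random Network Distillation (RND) mechanism behind this OOD detection can be illustrated in isolation: a predictor is fitted to imitate a fixed random "target" network on in-distribution data, and its prediction error serves as an OOD score. This is a minimal NumPy toy under assumed dimensions, not the paper's setup; the linear predictor and all sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 8, 32, 4

# Fixed, randomly initialised target network (never trained).
t1 = rng.standard_normal((d_in, d_hid))
t2 = rng.standard_normal((d_hid, d_out))

def target_net(x):
    # Random two-layer ReLU network whose outputs the predictor imitates.
    return np.maximum(x @ t1, 0.0) @ t2

# Predictor: deliberately weaker (linear), fitted by least squares on
# in-distribution data only.
in_dist = rng.standard_normal((200, d_in))
p, *_ = np.linalg.lstsq(in_dist, target_net(in_dist), rcond=None)

def ood_score(x):
    """Squared prediction error of the predictor vs. the fixed target.

    Inputs unlike the training data tend to get larger errors, which is
    the signal a decoder could use to disregard OOD context.
    """
    return np.mean((target_net(x) - x @ p) ** 2, axis=-1)

in_scores = ood_score(rng.standard_normal((50, d_in)))
far_scores = ood_score(rng.standard_normal((50, d_in)) * 5.0 + 3.0)
# far_scores are typically much larger than in_scores in this toy setup.
```

Because the predictor only ever saw in-distribution inputs, its error stays low there and grows for shifted inputs, giving a training-free OOD signal.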
arXiv Detail & Related papers (2021-10-25T19:26:29Z) - GroupBERT: Enhanced Transformer Architecture with Efficient Grouped
Structures [57.46093180685175]
We demonstrate a set of modifications to the structure of a Transformer layer, producing a more efficient architecture.
We add a convolutional module to complement the self-attention module, decoupling the learning of local and global interactions.
We apply the resulting architecture to language representation learning and demonstrate its superior performance compared to BERT models of different scales.
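The local/global decoupling described for GroupBERT — a convolutional path for local interactions alongside self-attention for global ones — can be sketched as follows. This is an illustrative NumPy toy, not the GroupBERT architecture itself; the summation of the two paths, the parameter-free attention, and all dimensions are simplifying assumptions.

```python
import numpy as np

def self_attention(x):
    # Parameter-free attention for illustration: global token mixing.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def depthwise_conv1d(x, kernels):
    """Per-channel 1-D convolution: each channel has its own small kernel,
    so it captures only local, window-sized interactions."""
    seq_len, d = x.shape
    k = kernels.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for t in range(seq_len):
        out[t] = np.sum(xp[t:t + k] * kernels, axis=0)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 8))             # (seq_len, hidden)
kernels = rng.standard_normal((3, 8)) * 0.1  # kernel_size=3, one per channel

# Decoupled paths: convolution handles local interactions, self-attention
# handles global ones; here the two paths are simply summed residually.
y = x + depthwise_conv1d(x, kernels) + self_attention(x)
print(y.shape)  # (10, 8)
```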
arXiv Detail & Related papers (2021-06-10T15:41:53Z) - Coreference Resolution without Span Representations [20.84150608402576]
We introduce a lightweight coreference model that removes the dependency on span representations, handcrafted features, and heuristics.
Our model performs competitively with the current end-to-end model, while being simpler and more efficient.
arXiv Detail & Related papers (2021-01-02T11:46:51Z) - Discontinuous Constituent Parsing with Pointer Networks [0.34376560669160383]
Discontinuous constituent trees are crucial for representing all grammatical phenomena of languages such as German.
Recent advances in dependency parsing have shown that Pointer Networks excel in efficiently parsing syntactic relations between words in a sentence.
We propose a novel neural network architecture that is able to generate the most accurate discontinuous constituent representations.
arXiv Detail & Related papers (2020-02-05T15:12:03Z)