TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance
Generation
- URL: http://arxiv.org/abs/2003.11963v2
- Date: Thu, 9 Apr 2020 09:59:25 GMT
- Title: TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance
Generation
- Authors: Shaojie Jiang, Thomas Wolf, Christof Monz, Maarten de Rijke
- Abstract summary: We study the repetition problem for encoder-decoder models, using both recurrent neural network (RNN) and transformer architectures.
By using higher weights for hard tokens and lower weights for easy tokens, NLG models are able to learn individual tokens at different paces.
- Score: 52.3803408133162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural Language Generation (NLG) models are prone to generating repetitive
utterances. In this work, we study the repetition problem for encoder-decoder
models, using both recurrent neural network (RNN) and transformer
architectures. To this end, we consider the chit-chat task, where the problem
is more prominent than in other tasks that need encoder-decoder architectures.
We first study the influence of model architectures. By using pre-attention and
highway connections for RNNs, we manage to achieve lower repetition rates.
However, this method does not generalize to other models such as transformers.
We hypothesize that the deeper reason is that in the training corpora, there
are hard tokens that are more difficult for a generative model to learn than
others and, once learning has finished, hard tokens are still under-learned, so
that repetitive generations are more likely to happen. Based on this
hypothesis, we propose token loss dynamic reweighting (TLDR) that applies
differentiable weights to individual token losses. By using higher weights for
hard tokens and lower weights for easy tokens, NLG models are able to learn
individual tokens at different paces. Experiments on chit-chat benchmark
datasets show that TLDR is more effective in repetition reduction for both RNN
and transformer architectures than baselines using different weighting
functions.
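
As a rough illustration of the reweighting idea, the sketch below computes per-token cross-entropy losses and scales each one by a weight that grows with that token's loss, so hard tokens contribute more to the training objective than easy ones. This is a minimal sketch only: the sigmoid-shaped weighting function, its steepness and threshold hyperparameters, and the padding mask are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def tldr_loss(logits, targets, pad_id=0, steepness=1.0, threshold=2.0):
    """Token loss dynamic reweighting (TLDR): a minimal sketch.

    Each token's cross-entropy loss is multiplied by a differentiable
    weight that increases with the loss itself, so hard (under-learned)
    tokens are emphasized and easy tokens are de-emphasized. The sigmoid
    weighting and its hyperparameters are illustrative choices.
    """
    # Per-token negative log-likelihood, shape (batch, seq_len).
    # cross_entropy expects the class dimension in position 1.
    token_loss = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )
    # Ignore padding positions when averaging.
    mask = (targets != pad_id).float()
    # Weight grows smoothly from ~0 (easy tokens) to ~1 (hard tokens).
    weights = torch.sigmoid(steepness * (token_loss - threshold))
    # Weighted mean over the non-padding tokens.
    return (weights * token_loss * mask).sum() / mask.sum().clamp(min=1.0)


# Example usage with random data (batch of 2, sequence length 5, vocab 100):
logits = torch.randn(2, 5, 100, requires_grad=True)
targets = torch.randint(1, 100, (2, 5))
loss = tldr_loss(logits, targets)
loss.backward()
```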
Related papers
- TL;DR: Too Long, Do Re-weighting for Efficient LLM Reasoning Compression [55.37723860832064]
We propose a dynamic ratio-based training pipeline that does not rely on sophisticated data annotations.
We validate our approach on the DeepSeek-R1-Distill-7B and DeepSeek-R1-Distill-14B models and on a diverse set of benchmarks with varying difficulty levels.
arXiv Detail & Related papers (2025-06-03T09:23:41Z)
- Regress, Don't Guess -- A Regression-like Loss on Number Tokens for Language Models [2.5464748274973026]
We present a regression-like loss that operates purely on the token level.
Our proposed Number Token Loss (NTL) comes in two flavors and minimizes either the Lp norm or the Wasserstein distance.
We evaluate the proposed scheme on various mathematical datasets and find that it consistently improves performance in math-related tasks.
arXiv Detail & Related papers (2024-11-04T13:43:24Z)
- Attention as an RNN [66.5420926480473]
We show that attention can be viewed as a special Recurrent Neural Network (RNN) with the ability to compute its many-to-one RNN output efficiently.
We introduce a new efficient method of computing attention's many-to-many RNN output based on the parallel prefix scan algorithm.
We show Aarens achieve comparable performance to Transformers on 38 datasets spread across four popular sequential problem settings.
arXiv Detail & Related papers (2024-05-22T19:45:01Z)
- SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks [21.616328837090396]
Spiking Neural Networks (SNNs) leverage sparse and event-driven activations to reduce the computational overhead associated with model inference.
We implement a generative language model with binary, event-driven spiking activation units.
SpikeGPT is the largest backpropagation-trained SNN model to date, rendering it suitable for both the generation and comprehension of natural language.
arXiv Detail & Related papers (2023-02-27T16:43:04Z)
- Decomposing a Recurrent Neural Network into Modules for Enabling Reusability and Replacement [11.591247347259317]
We propose the first approach to decompose an RNN into modules.
We study different types of RNNs, i.e., Vanilla, LSTM, and GRU.
We show how such RNN modules can be reused and replaced in various scenarios.
arXiv Detail & Related papers (2022-12-09T03:29:38Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- TSNAT: Two-Step Non-Autoregressvie Transformer Models for Speech Recognition [69.68154370877615]
The non-autoregressive (NAR) models can get rid of the temporal dependency between the output tokens and predict the entire output sequence in one step or a few steps.
To address these problems, we propose a new model named the two-step non-autoregressive transformer (TSNAT).
The results show that TSNAT can achieve performance competitive with the AR model and outperform many complicated NAR models.
arXiv Detail & Related papers (2021-04-04T02:34:55Z)
- Alignment Restricted Streaming Recurrent Neural Network Transducer [29.218353627837214]
We propose a modification to the RNN-T loss function and develop Alignment Restricted RNN-T models.
The Ar-RNN-T loss provides refined control to navigate the trade-off between token emission delays and the Word Error Rate (WER).
The Ar-RNN-T models also improve downstream applications such as the ASR End-pointing by guaranteeing token emissions within any given range of latency.
arXiv Detail & Related papers (2020-11-05T19:38:54Z)
- A Token-wise CNN-based Method for Sentence Compression [31.9210679048841]
Sentence compression is a Natural Language Processing (NLP) task aimed at shortening original sentences and preserving their key information.
Current methods are largely based on Recurrent Neural Network (RNN) models which suffer from poor processing speed.
We propose a token-wise Convolutional Neural Network, a CNN-based model combined with pre-trained Bidirectional Encoder Representations from Transformers (BERT) features, for deletion-based sentence compression.
arXiv Detail & Related papers (2020-09-23T17:12:06Z)
- A Study of Non-autoregressive Model for Sequence Generation [147.89525760170923]
Non-autoregressive (NAR) models generate all the tokens of a sequence in parallel.
We propose knowledge distillation and source-target alignment to bridge the gap between AR and NAR models.
arXiv Detail & Related papers (2020-04-22T09:16:09Z)
- Recognizing Long Grammatical Sequences Using Recurrent Networks Augmented With An External Differentiable Stack [73.48927855855219]
Recurrent neural networks (RNNs) are a widely used deep architecture for sequence modeling, generation, and prediction.
RNNs generalize poorly over very long sequences, which limits their applicability to many important temporal processing and time series forecasting problems.
One way to address these shortcomings is to couple an RNN with an external, differentiable memory structure, such as a stack.
In this paper, we improve the memory-augmented RNN with important architectural and state updating mechanisms.
arXiv Detail & Related papers (2020-04-04T14:19:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.