Enhancing RWKV-based Language Models for Long-Sequence Text Generation
- URL: http://arxiv.org/abs/2502.15485v2
- Date: Mon, 24 Feb 2025 14:30:32 GMT
- Title: Enhancing RWKV-based Language Models for Long-Sequence Text Generation
- Authors: Xinghan Pan
- Abstract summary: This paper introduces an enhanced RWKV architecture with adaptive temporal gating mechanisms for improved long-context language modeling. We propose two principal innovations: (1) a position-aware convolutional shift operator that captures local syntactic patterns while preserving global coherence, and (2) a neurally-gated information routing mechanism that dynamically regulates inter-token information flow.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces an enhanced RWKV architecture with adaptive temporal gating mechanisms for improved long-context language modeling. We propose two principal innovations: (1) a position-aware convolutional shift operator that captures local syntactic patterns while preserving global coherence, and (2) a neurally-gated information routing mechanism that dynamically regulates inter-token information flow. Through comprehensive experiments on text generation tasks, our enhanced model demonstrates superior performance compared to the baseline RWKV, achieving a 96.5% relative improvement in ROUGE-L scores with only a 2.95% increase in inference latency. Ablation studies validate the individual contributions of each component, while linguistic analysis reveals the model's adaptive attention to syntactic boundaries and entity coherence. The proposed modifications maintain RWKV's linear computational complexity while significantly enhancing its contextual modeling capabilities, establishing new state-of-the-art performance for recurrent-style architectures in long-form text generation.
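To make the two proposed components more concrete, the PyTorch sketch below shows one plausible way they could sit in front of an RWKV-style time-mixing block. The paper's implementation is not reproduced here: every name in the snippet (PositionAwareShift, GatedRouter, the kernel size, the sigmoid gating) is an illustrative assumption inferred from the abstract, not the authors' released code.

```python
# Illustrative sketch only: all module and parameter names are hypothetical
# readings of the abstract, layered on a standard RWKV-style token-shift idea.
# Shapes: x is (batch, seq_len, d_model).
import torch
import torch.nn as nn


class PositionAwareShift(nn.Module):
    """Depthwise causal convolution standing in for RWKV's token shift,
    blended with the identity path by a position-conditioned gate (one
    plausible reading of a 'position-aware convolutional shift operator')."""

    def __init__(self, d_model: int, kernel_size: int = 3, max_len: int = 4096):
        super().__init__()
        self.conv = nn.Conv1d(
            d_model, d_model, kernel_size,
            padding=kernel_size - 1, groups=d_model,  # per-channel, causal after trimming
        )
        self.pos_mix = nn.Embedding(max_len, d_model)  # position-dependent mixing weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        local = self.conv(x.transpose(1, 2))[..., :t].transpose(1, 2)  # keep causal part
        mix = torch.sigmoid(self.pos_mix(torch.arange(t, device=x.device)))  # (t, d)
        return mix * local + (1.0 - mix) * x  # blend local pattern with the raw token


class GatedRouter(nn.Module):
    """Neurally-gated routing: a learned sigmoid gate decides how much of the
    shifted (contextual) signal flows on versus the untouched token state."""

    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, x: torch.Tensor, shifted: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([x, shifted], dim=-1))
        return g * shifted + (1.0 - g) * x


# Usage: place both modules ahead of an existing RWKV time-mixing block.
x = torch.randn(2, 128, 512)                    # (batch, seq_len, d_model)
shift, router = PositionAwareShift(512), GatedRouter(512)
y = router(x, shift(x))                         # same shape as x, cost linear in seq_len
```

Both modules perform a constant amount of work per token (a depthwise causal convolution plus an element-wise gate), which is consistent with the abstract's claim that the modifications preserve RWKV's linear computational complexity.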
Related papers
- Integrating Textual Embeddings from Contrastive Learning with Generative Recommender for Enhanced Personalization [8.466223794246261]
We propose a hybrid framework that augments the generative recommender with a contrastive text embedding model.
We evaluate our method on two domains from the Amazon Reviews 2023 dataset.
arXiv Detail & Related papers (2025-04-13T15:23:00Z) - ReCoM: Realistic Co-Speech Motion Generation with Recurrent Embedded Transformer [58.49950218437718]
We present ReCoM, an efficient framework for generating high-fidelity and generalizable human body motions synchronized with speech.
The core innovation lies in the Recurrent Embedded Transformer (RET), which integrates Dynamic Embedding Regularization (DER) into a Vision Transformer (ViT) core architecture.
To enhance model robustness, we incorporate the proposed DER strategy, which equips the model with dual capabilities of noise resistance and cross-domain generalization.
arXiv Detail & Related papers (2025-03-27T16:39:40Z) - FELLE: Autoregressive Speech Synthesis with Token-Wise Coarse-to-Fine Flow Matching [51.32059240975148]
FELLE is an autoregressive model that integrates language modeling with token-wise flow matching.
For each continuous-valued token, FELLE modifies the general prior distribution in flow matching by incorporating information from the previous step.
FELLE generates continuous-valued tokens hierarchically, conditioned on the language model's output.
arXiv Detail & Related papers (2025-02-16T13:54:32Z) - Context-Preserving Gradient Modulation for Large Language Models: A Novel Approach to Semantic Consistency in Long-Form Text Generation [0.19791587637442667]
A novel gradient modulation approach is introduced to adjust parameter updates dynamically in response to contextual relevance.
The proposed method enhances the stability of model-generated narratives without imposing significant computational overhead.
arXiv Detail & Related papers (2025-02-05T22:13:06Z) - Structural Embedding Projection for Contextual Large Language Model Inference [0.0]
Structured embedding transformations offer a promising approach for enhancing the efficiency and coherence of language model inference.
The mathematical formulation of Structural Embedding Projection (SEP) enables embedding spaces to capture structured contextual relationships.
The impact of SEP on lexical diversity suggested that embedding modifications influenced the model's vocabulary usage.
arXiv Detail & Related papers (2025-01-31T00:46:21Z) - GASE: Generatively Augmented Sentence Encoding [0.0]
We propose an approach to enhance sentence embeddings by applying generative text models for data augmentation at inference time.
Generatively Augmented Sentence Encoding uses diverse synthetic variants of input texts generated by paraphrasing, summarising or extracting keywords.
We find that generative augmentation leads to larger performance improvements for embedding models with lower baseline performance.
arXiv Detail & Related papers (2024-11-07T17:53:47Z) - Exploring Precision and Recall to assess the quality and diversity of LLMs [82.21278402856079]
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral.
This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora.
arXiv Detail & Related papers (2024-02-16T13:53:26Z) - A Cross-Attention Augmented Model for Event-Triggered Context-Aware Story Generation [28.046803293933213]
We introduce a novel neural generation model, EtriCA, that enhances the relevance and coherence of generated stories.
We employ a post-training framework for knowledge enhancement (KeEtriCA) on a large-scale book corpus.
This results in approximately 5% improvement in automatic metrics and over 10% improvement in human evaluation.
arXiv Detail & Related papers (2023-11-19T08:54:47Z) - RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model [93.8067369210696]
Text-to-image generation (TTI) refers to the use of models that can process text input and generate high-fidelity images based on text descriptions.
Diffusion models are one prominent type of generative model used to generate images through the systematic introduction of noise over repeated steps.
In the era of large models, scaling up model size and the integration with large language models have further improved the performance of TTI models.
arXiv Detail & Related papers (2023-09-02T03:27:20Z) - Learning Evaluation Models from Large Language Models for Sequence Generation [44.22820310679188]
Large language models achieve state-of-the-art performance on sequence generation evaluation, but typically have a large number of parameters.
We propose ECT, an evaluation capability transfer method, to transfer the evaluation capability from LLMs to relatively lightweight language models.
Based on the proposed ECT, we learn various evaluation models from ChatGPT, and employ them as reward models to improve sequence generation models.
arXiv Detail & Related papers (2023-08-08T16:41:16Z) - (Psycho-)Linguistic Features Meet Transformer Models for Improved Explainable and Controllable Text Simplification [31.64341800095214]
We aim to advance current research on explainable and controllable TS.
We use a large set of (psycho-)linguistic features in combination with pre-trained language models to improve explainable complexity prediction.
We extend a state-of-the-art Seq2Seq TS model, ACCESS, to enable explicit control of ten attributes.
arXiv Detail & Related papers (2022-12-19T20:46:21Z) - Model Criticism for Long-Form Text Generation [113.13900836015122]
We apply a statistical tool, model criticism in latent space, to evaluate the high-level structure of generated text.
We perform experiments on three representative aspects of high-level discourse -- coherence, coreference, and topicality.
We find that transformer-based language models are able to capture topical structures but have a harder time maintaining structural coherence or modeling coreference.
arXiv Detail & Related papers (2022-10-16T04:35:58Z) - Improving Distantly Supervised Relation Extraction by Natural Language Inference [9.181270251524866]
We propose a novel DSRE-NLI framework, which considers both distant supervision from existing knowledge bases and indirect supervision from pretrained language models for other tasks.
DSRE-NLI energizes an off-the-shelf natural language inference (NLI) engine with a semi-automatic relation verbalization (SARV) mechanism to provide indirect supervision.
With two simple and effective data consolidation strategies, the quality of training data is substantially improved.
arXiv Detail & Related papers (2022-07-31T02:48:34Z) - RSTGen: Imbuing Fine-Grained Interpretable Control into Long-Form Text Generators [26.27412809287025]
RSTGen is a framework that controls the discourse structure, semantics and topics of generated text.
We demonstrate our model's ability to control structural discourse and semantic features of generated text in open generation evaluation.
arXiv Detail & Related papers (2022-05-25T09:06:04Z) - Non-Autoregressive Text Generation with Pre-trained Language Models [40.50508206201288]
We show that BERT can be employed as the backbone of a NAG model to greatly improve performance.
We devise mechanisms to alleviate the two common problems of vanilla NAG models.
We propose a new decoding strategy, ratio-first, for applications where the output lengths can be approximately estimated beforehand.
arXiv Detail & Related papers (2021-02-16T15:30:33Z) - Enriching Non-Autoregressive Transformer with Syntactic and Semantic Structures for Neural Machine Translation [54.864148836486166]
We propose to incorporate the explicit syntactic and semantic structures of languages into a non-autoregressive Transformer.
Our model achieves significantly faster inference while maintaining translation quality compared with several state-of-the-art non-autoregressive models.
arXiv Detail & Related papers (2021-01-22T04:12:17Z) - Improving Text Generation with Student-Forcing Optimal Transport [122.11881937642401]
We propose using optimal transport (OT) to match the sequences generated in training and testing modes.
An extension is also proposed to improve the OT learning, based on the structural and contextual information of the text sequences.
The effectiveness of the proposed method is validated on machine translation, text summarization, and text generation tasks.
arXiv Detail & Related papers (2020-10-12T19:42:25Z) - Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation [56.73834525802723]
Lightweight Dynamic Graph Convolutional Networks (LDGCNs) are proposed.
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter saving strategies based on the group graph convolutions and weight tied convolutions to reduce memory usage and model complexity.
arXiv Detail & Related papers (2020-10-09T06:03:46Z) - Improving Adversarial Text Generation by Modeling the Distant Future [155.83051741029732]
We consider a text planning scheme and present a model-based imitation-learning approach to alleviate the aforementioned issues.
We propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization.
arXiv Detail & Related papers (2020-05-04T05:45:13Z) - Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
With a strong auto-regressive decoder, VAEs tend to ignore the latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.