Reinforced Fast Weights with Next-Sequence Prediction
- URL: http://arxiv.org/abs/2602.16704v1
- Date: Wed, 18 Feb 2026 18:53:18 GMT
- Title: Reinforced Fast Weights with Next-Sequence Prediction
- Authors: Hee Seung Hwang, Xindi Wu, Sanghyuk Chun, Olga Russakovsky
- Abstract summary: REFINE is a reinforcement learning framework that trains fast weight models under the next-sequence prediction (NSP) objective. REFINE consistently outperforms supervised fine-tuning with NTP across needle-in-a-haystack retrieval, long-context question answering, and diverse tasks in LongBench.
- Score: 42.710296902935426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fast weight architectures offer a promising alternative to attention-based transformers for long-context modeling by maintaining constant memory overhead regardless of context length. However, their potential is limited by the next-token prediction (NTP) training paradigm. NTP optimizes single-token predictions and ignores semantic coherence across multiple tokens following a prefix. Consequently, fast weight models, which dynamically update their parameters to store contextual information, learn suboptimal representations that fail to capture long-range dependencies. We introduce REFINE (Reinforced Fast weIghts with Next sEquence prediction), a reinforcement learning framework that trains fast weight models under the next-sequence prediction (NSP) objective. REFINE selects informative token positions based on prediction entropy, generates multi-token rollouts, assigns self-supervised sequence-level rewards, and optimizes the model with group relative policy optimization (GRPO). REFINE is applicable throughout the training lifecycle of pre-trained language models: mid-training, post-training, and test-time training. Our experiments on LaCT-760M and DeltaNet-1.3B demonstrate that REFINE consistently outperforms supervised fine-tuning with NTP across needle-in-a-haystack retrieval, long-context question answering, and diverse tasks in LongBench. REFINE provides an effective and versatile framework for improving long-context modeling in fast weight architectures.
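As a rough illustration of two of the steps named in the abstract, the sketch below shows entropy-based position selection and GRPO-style group-relative advantages. The function names, tensor shapes, and placeholder rewards are assumptions made for illustration, not REFINE's released implementation.

```python
import torch
import torch.nn.functional as F

def select_positions_by_entropy(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Pick the k prefix positions with the highest next-token prediction
    entropy, a proxy for 'informative' rollout starting points.
    logits: (seq_len, vocab_size) next-token logits for one sequence."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)  # (seq_len,)
    return torch.topk(entropy, k).indices

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages: normalize each rollout's sequence-level
    reward against the other rollouts generated from the same position.
    rewards: (num_positions, group_size)."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + 1e-6)

# Toy usage with random tensors standing in for model outputs.
logits = torch.randn(128, 32000)        # fake next-token logits over a 128-token prefix
positions = select_positions_by_entropy(logits, k=4)
rewards = torch.rand(4, 8)              # 8 rollouts per selected position, each scored by a
advantages = grpo_advantages(rewards)   # self-supervised sequence-level reward (placeholder)
print(positions.tolist(), advantages.shape)
```

In a full pipeline, the selected positions would seed multi-token rollouts and the resulting advantages would weight the policy-gradient update of the fast weight model.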
Related papers
- Predicting LLM Output Length via Entropy-Guided Representations [13.351384070796747]
We introduce a lightweight framework that reuses the main model's internal hidden states for efficient length prediction. Our framework features two core components: 1) Entropy-Guided Token Pooling (EGTP), which uses on-the-fly activations and token entropy for highly accurate static prediction.
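A minimal sketch of what entropy-weighted pooling could look like is given below; the softmax weighting, shapes, and downstream use for a length head are assumptions, not EGTP's published formulation.

```python
import torch
import torch.nn.functional as F

def entropy_guided_pool(hidden: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """Pool per-token hidden states into one vector, weighting each token by
    the entropy of its next-token distribution (uncertain tokens count more).
    hidden: (seq_len, d_model), logits: (seq_len, vocab_size)."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)  # (seq_len,)
    weights = F.softmax(entropy, dim=0)                           # normalize to a distribution
    return (weights.unsqueeze(-1) * hidden).sum(dim=0)            # (d_model,) pooled feature

pooled = entropy_guided_pool(torch.randn(64, 32), torch.randn(64, 1000))
print(pooled.shape)   # the pooled vector could feed a small length-prediction head
```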
arXiv Detail & Related papers (2026-02-12T10:49:04Z)
- Next Concept Prediction in Discrete Latent Space Leads to Stronger Language Models [62.054835560934066]
Next Concept Prediction is a generative pretraining paradigm built on top of Next Token Prediction. Our model, ConceptLM, quantizes hidden states using Vector Quantization and constructs a concept vocabulary. Results on 13 benchmarks show that NCP yields consistent performance gains over traditional token-level models.
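The quantization step can be pictured as a nearest-neighbor lookup into a learned codebook; the sketch below is a minimal illustration under assumed shapes, not ConceptLM's actual training code.

```python
import torch

def quantize_to_concepts(hidden: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each hidden state to the index of its nearest codebook entry,
    yielding a discrete 'concept' id per position.
    hidden: (seq_len, d_model), codebook: (num_concepts, d_model)."""
    dists = torch.cdist(hidden, codebook)   # (seq_len, num_concepts) L2 distances
    return dists.argmin(dim=-1)             # concept id for every position

codebook = torch.randn(512, 64)             # assumed concept vocabulary of 512 entries
concept_ids = quantize_to_concepts(torch.randn(10, 64), codebook)
print(concept_ids)                          # these ids would serve as targets for a next-concept loss
```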
arXiv Detail & Related papers (2026-02-09T18:33:31Z)
- Context-level Language Modeling by Learning Predictive Context Embeddings [79.00607069677393]
We introduce ContextLM, a framework that augments standard pretraining with an inherent next-context prediction objective. This mechanism trains the model to learn predictive representations of multi-token contexts, leveraging error signals derived from future token chunks. Experiments on the GPT2 and Pythia model families, scaled up to 1.5B parameters, show that ContextLM delivers consistent improvements in both perplexity and downstream task performance.
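A toy version of such a chunk-level auxiliary objective is sketched below; the mean-pooled targets, cosine loss, and module name are illustrative assumptions rather than ContextLM's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextContextHead(nn.Module):
    """Toy auxiliary objective: from the last hidden state of each chunk,
    predict a pooled embedding of the following chunk."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, hidden: torch.Tensor, chunk: int) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model); group the sequence into chunks.
        b, t, d = hidden.shape
        n = t // chunk
        chunks = hidden[:, : n * chunk].reshape(b, n, chunk, d)
        targets = chunks.mean(dim=2).detach()     # pooled embedding per chunk (no grad to targets)
        preds = self.proj(chunks[:, :-1, -1])     # predict chunk i+1 from the last state of chunk i
        return 1 - F.cosine_similarity(preds, targets[:, 1:], dim=-1).mean()

aux = NextContextHead(d_model=64)
loss = aux(torch.randn(2, 128, 64), chunk=16)    # would be added to the standard NTP loss
print(float(loss))
```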
arXiv Detail & Related papers (2025-10-23T07:09:45Z)
- Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries [35.39150917025755]
Future summary prediction (FSP) trains an auxiliary head to predict a compact representation of the long-term future. FSP provides improvements over both NTP and MTP across math, reasoning, and coding benchmarks.
arXiv Detail & Related papers (2025-10-16T14:52:52Z)
- Holdout-Loss-Based Data Selection for LLM Finetuning via In-Context Learning [19.677969862434708]
We present a theoretically grounded, resource-efficient framework for data selection and reweighting. At its core is an In-Context Approximation (ICA) that estimates the holdout loss a model would incur after training on a candidate example. We derive per-example weights from ICA scores, dynamically reweighting gradient updates as model parameters evolve.
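A toy version of the reweighting step is sketched below; the softmax rule and temperature are assumptions standing in for the paper's actual weighting scheme.

```python
import torch

def reweight_by_estimated_holdout_loss(scores: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Turn per-example holdout-loss estimates into training weights:
    lower estimated holdout loss -> higher weight."""
    return torch.softmax(-scores / tau, dim=0)

# Toy usage: three candidate finetuning examples with estimated holdout losses.
ica_scores = torch.tensor([2.3, 0.7, 1.1])      # e.g. produced by an in-context approximation
weights = reweight_by_estimated_holdout_loss(ica_scores)
print(weights)                                  # per-example weights that could scale each loss term
```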
arXiv Detail & Related papers (2025-10-16T09:00:39Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Contextually Enhanced ES-dRNN with Dynamic Attention for Short-Term Load Forecasting [1.1602089225841632]
The proposed model is composed of two simultaneously trained tracks: the context track and the main track.
The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive recurrent cells.
The model produces both point forecasts and predictive intervals.
arXiv Detail & Related papers (2022-12-18T07:42:48Z)
- Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively [32.001304911395756]
We propose a Dynamic Parameter Selection (DPS) algorithm for large-scale pre-trained models during fine-tuning.
Experiments on the GLUE benchmark show that DPS outperforms previous fine-tuning methods in terms of overall performance and stability.
arXiv Detail & Related papers (2022-11-03T08:32:12Z)
- Adversarial Self-Attention for Language Understanding [89.265747130584]
This paper proposes an Adversarial Self-Attention mechanism (ASA).
ASA adversarially reconstructs the Transformer attentions and facilitates model training from contaminated model structures.
For fine-tuning, ASA-empowered models consistently outperform naive models by a large margin in both generalization and robustness.
arXiv Detail & Related papers (2022-06-25T09:18:10Z)
- DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models [152.29364079385635]
As pre-trained models grow bigger, the fine-tuning process can be time-consuming and computationally expensive.
We propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights.
Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter efficient fine-tuning and (ii) resource-efficient inference.
arXiv Detail & Related papers (2021-10-30T03:29:47Z)