ResFormer: All-Time Reservoir Memory for Long Sequence Classification
- URL: http://arxiv.org/abs/2509.24074v1
- Date: Sun, 28 Sep 2025 21:20:49 GMT
- Title: ResFormer: All-Time Reservoir Memory for Long Sequence Classification
- Authors: Hongbo Liu, Jia Xu
- Abstract summary: Sequence classification is essential in NLP for understanding and categorizing language patterns in tasks like sentiment analysis, intent detection, and topic classification. Transformer-based models, despite achieving state-of-the-art performance, have inherent limitations due to quadratic time and memory complexity. We propose ResFormer, a novel neural network architecture designed to model varying context lengths efficiently through a cascaded methodology.
- Score: 4.298381633106637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequence classification is essential in NLP for understanding and categorizing language patterns in tasks like sentiment analysis, intent detection, and topic classification. Transformer-based models, despite achieving state-of-the-art performance, have inherent limitations due to quadratic time and memory complexity, restricting their input length. Although extensive efforts have aimed at reducing computational demands, processing extensive contexts remains challenging. To overcome these limitations, we propose ResFormer, a novel neural network architecture designed to model varying context lengths efficiently through a cascaded methodology. ResFormer integrates a reservoir computing network featuring a nonlinear readout to effectively capture long-term contextual dependencies in linear time. Concurrently, short-term dependencies within sentences are modeled using a conventional Transformer architecture with fixed-length inputs. Experiments demonstrate that ResFormer significantly outperforms baseline models of DeepSeek-Qwen and ModernBERT, delivering an accuracy improvement of up to +22.3% on the EmoryNLP dataset and consistent gains on MultiWOZ, MELD, and IEMOCAP. In addition, ResFormer exhibits reduced memory consumption, underscoring its effectiveness and efficiency in modeling extensive contextual information.
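The abstract's long-term component can be illustrated with a minimal echo-state reservoir: a fixed random recurrent network whose state is updated once per token, giving linear time in sequence length. This is a generic sketch of reservoir computing, not the paper's actual architecture; the class name, dimensions, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReservoirMemory:
    """Minimal echo-state reservoir (illustrative sketch, not ResFormer itself).

    The recurrent weights are random and fixed; only a readout on top would
    be trained. Each token costs one O(d^2) state update, so processing a
    length-T sequence is O(T) -- the linear-time property the abstract cites.
    """

    def __init__(self, input_dim, reservoir_dim=256, spectral_radius=0.9):
        self.W_in = rng.normal(scale=0.1, size=(reservoir_dim, input_dim))
        W = rng.normal(size=(reservoir_dim, reservoir_dim))
        # Rescale so the largest eigenvalue magnitude is below 1, a common
        # heuristic for the echo-state property (fading memory of old inputs).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.state = np.zeros(reservoir_dim)

    def step(self, x):
        # One recurrent update per token; tanh keeps the state bounded.
        self.state = np.tanh(self.W_in @ x + self.W @ self.state)
        return self.state

    def read(self, embeddings):
        """Run over a token-embedding sequence; the final state is a
        fixed-size summary of the whole (arbitrarily long) context."""
        for x in embeddings:
            self.step(x)
        return self.state
```

In a cascaded setup like the one the abstract describes, this fixed-size summary would be passed through a (trained) nonlinear readout and combined with a fixed-length Transformer's representation of the local sentence.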
Related papers
- SparseST: Exploiting Data Sparsity in Spatiotemporal Modeling and Prediction [17.919235390330595]
We develop SparseST, a novel framework that exploits data sparsity to build an efficient model. We also explore and approximate the trade-off front between model performance and computational efficiency by designing a multi-objective composite loss function.
arXiv Detail & Related papers (2025-11-18T18:53:37Z) - Improving Long-term Autoregressive Spatiotemporal Predictions: A Proof of Concept with Fluid Dynamics [10.71350538032054]
For complex systems, long-term accuracy often deteriorates due to error accumulation. We propose the PushForward framework, which retains one-step-ahead training while enabling multi-step learning. SPF builds a supplementary dataset from model predictions and combines it with ground truth via an acquisition strategy.
arXiv Detail & Related papers (2025-08-25T23:51:18Z) - MesaNet: Sequence Modeling by Locally Optimal Test-Time Training [67.45211108321203]
We introduce a numerically stable, chunkwise parallelizable version of the recently proposed Mesa layer. We show that optimal test-time training enables reaching lower language modeling perplexity and higher downstream benchmark performance than previous RNNs.
arXiv Detail & Related papers (2025-06-05T16:50:23Z) - LESA: Learnable LLM Layer Scaling-Up [57.0510934286449]
Training Large Language Models (LLMs) from scratch requires immense computational resources, making it prohibitively expensive. Model scaling-up offers a promising solution by leveraging the parameters of smaller models to create larger ones. We propose LESA, a novel learnable method for depth scaling-up.
arXiv Detail & Related papers (2025-02-19T14:58:48Z) - Efficient Knowledge Feeding to Language Models: A Novel Integrated Encoder-Decoder Architecture [0.0]
ICV recasts in-context learning by using latent embeddings of language models. ICV directly integrates information into the model, enabling it to process this information more effectively.
arXiv Detail & Related papers (2025-02-07T04:24:07Z) - Stuffed Mamba: Oversized States Lead to the Inability to Forget [53.512358993801115]
We show that Mamba-based models struggle to effectively forget earlier tokens even with built-in forgetting mechanisms. We show that the minimum training length required for the model to learn forgetting scales linearly with the state size, and the maximum context length for accurate retrieval of a 5-digit passkey scales exponentially with the state size. Our work suggests that future RNN designs must account for the interplay between state size, training length, and forgetting mechanisms to achieve robust performance in long-context tasks.
arXiv Detail & Related papers (2024-10-09T17:54:28Z) - SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, utilizing minimal late pre-trained layers could alleviate the peak demand on memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z) - LongVQ: Long Sequence Modeling with Vector Quantization on Structured Memory [63.41820940103348]
The self-attention mechanism's computational cost limits its practicality for long sequences.
We propose a new method called LongVQ to compress the global abstraction as a length-fixed codebook.
LongVQ effectively maintains dynamic global and local patterns, which helps to address the lack of long-range dependencies.
arXiv Detail & Related papers (2024-04-17T08:26:34Z) - Enhancing Transformer RNNs with Multiple Temporal Perspectives [18.884124657093405]
We introduce the concept of multiple temporal perspectives, a novel approach applicable to Recurrent Neural Network (RNN) architectures.
This method involves maintaining diverse temporal views of previously encountered text, significantly enriching the language models' capacity to interpret context.
arXiv Detail & Related papers (2024-02-04T22:12:29Z) - Confident Adaptive Language Modeling [95.45272377648773]
CALM is a framework for dynamically allocating different amounts of compute per input and generation timestep.
We demonstrate the efficacy of our framework in reducing compute -- potential speedup of up to $\times 3$ -- while provably maintaining high performance.
arXiv Detail & Related papers (2022-07-14T17:00:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.