HyTRec: A Hybrid Temporal-Aware Attention Architecture for Long Behavior Sequential Recommendation
- URL: http://arxiv.org/abs/2602.18283v1
- Date: Fri, 20 Feb 2026 15:11:40 GMT
- Title: HyTRec: A Hybrid Temporal-Aware Attention Architecture for Long Behavior Sequential Recommendation
- Authors: Lei Xin, Yuhao Zheng, Ke Cheng, Changjiang Jiang, Zifan Zhang, Fanhu Zeng,
- Abstract summary: HyTRec is a model featuring a Hybrid Attention architecture that decouples long-term stable preferences from short-term intent spikes. Our approach restores precise retrieval capabilities in industrial-scale contexts involving ten thousand interactions. Empirical results on industrial-scale datasets confirm that our model maintains linear inference speed and outperforms strong baselines.
- Score: 5.1321456889159425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling long sequences of user behaviors has emerged as a critical frontier in generative recommendation. However, existing solutions face a dilemma: linear attention mechanisms achieve efficiency at the cost of retrieval precision due to limited state capacity, while softmax attention suffers from prohibitive computational overhead. To address this challenge, we propose HyTRec, a model featuring a Hybrid Attention architecture that explicitly decouples long-term stable preferences from short-term intent spikes. By assigning massive historical sequences to a linear attention branch and reserving a specialized softmax attention branch for recent interactions, our approach restores precise retrieval capabilities within industrial-scale contexts involving ten thousand interactions. To mitigate the lag in capturing rapid interest drifts within the linear layers, we further design a Temporal-Aware Delta Network (TADN) to dynamically upweight fresh behavioral signals while effectively suppressing historical noise. Empirical results on industrial-scale datasets confirm that our model maintains linear inference speed and outperforms strong baselines, notably delivering over 8% improvement in Hit Rate for users with ultra-long sequences while remaining highly efficient.
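The abstract does not include code, but the split it describes can be illustrated with a minimal sketch: the long history is compressed by a linear-attention branch while recent interactions go through exact softmax attention. Everything below (the ELU feature map, the window size, the additive fusion) is an illustrative assumption, not the authors' implementation, and the temporal-aware (TADN) component is omitted.

```python
# Hedged sketch of a hybrid long/short attention split over a behavior sequence.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Kernelized linear attention: cost grows linearly with the history length."""
    q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature map
    kv = torch.einsum("ld,le->de", k, v)         # whole history compressed into a d x d state
    z = k.sum(dim=0)                             # normalizer
    return torch.einsum("ld,de->le", q, kv) / (q @ z + 1e-6).unsqueeze(-1)

def hybrid_attention(seq, recent_len=64, d=32):
    """Long history -> linear branch; recent window -> exact softmax branch."""
    L, _ = seq.shape
    q = seq[-recent_len:]                        # untrained projections omitted for brevity
    history, recent = seq[: L - recent_len], seq[L - recent_len:]
    long_out = linear_attention(q, history, history)          # long-term stable preferences
    attn = torch.softmax(q @ recent.T / d ** 0.5, dim=-1)     # short-term intent, exact retrieval
    short_out = attn @ recent
    return long_out + short_out                  # a learned gate/fusion is simplified to a sum

out = hybrid_attention(torch.randn(1024, 32))
print(out.shape)  # torch.Size([64, 32])
```

The point of the split is that the softmax branch keeps exact retrieval over the recent window, while the linear branch keeps the cost of a ten-thousand-item history linear in its length.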
Related papers
- FuXi-Linear: Unleashing the Power of Linear Attention in Long-term Time-aware Sequential Recommendation [86.55349738440087]
FuXi-Linear is a linear-complexity model designed for efficient long-sequence recommendation. Our approach introduces two key components: (1) a Temporal Retention Channel that independently computes periodic attention weights using temporal data, preventing crosstalk between temporal and semantic signals; and (2) a Linear Positional Channel that integrates positional information through learnable kernels within linear complexity.
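As a very loose illustration of keeping temporal and semantic signals on separate paths, the sketch below scales each key/value contribution of a linear attention layer by a periodic weight computed purely from timestamps; the particular form of the weight and all names are assumptions, not the FuXi-Linear design.

```python
# Hedged sketch: periodic temporal weighting applied inside linear attention.
import torch
import torch.nn.functional as F

def temporally_weighted_linear_attention(x, t, period=7 * 86400.0):
    """x: (L, d) item embeddings; t: (L,) interaction timestamps in seconds."""
    q = k = F.elu(x) + 1                                     # positive feature map
    v = x
    w = 0.5 * (1 + torch.cos(2 * torch.pi * t / period))     # periodic (e.g. weekly) weight, (L,)
    kv = torch.einsum("l,ld,le->de", w, k, v)                # temporally weighted KV state
    z = (w[:, None] * k).sum(dim=0)                          # matching normalizer
    return torch.einsum("ld,de->le", q, kv) / (q @ z + 1e-6).unsqueeze(-1)

x, t = torch.randn(300, 32), torch.cumsum(torch.rand(300) * 3600, dim=0)
print(temporally_weighted_linear_attention(x, t).shape)  # torch.Size([300, 32])
```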
arXiv Detail & Related papers (2026-02-27T04:38:28Z) - GEMs: Breaking the Long-Sequence Barrier in Generative Recommendation with a Multi-Stream Decoder [54.64137490632567]
We propose a novel and unified framework designed to capture users' sequences from long-term history. Generative Multi-streamers (GEMs) break user sequences into three streams. Extensive experiments on large-scale industrial datasets demonstrate that GEMs significantly outperforms state-of-the-art methods in recommendation accuracy.
arXiv Detail & Related papers (2026-02-14T06:42:56Z) - Gated Rotary-Enhanced Linear Attention for Long-term Sequential Recommendation [14.581838243440922]
We propose a long-term sequential recommendation model with Gated Rotary-Enhanced Linear Attention (RecGRELA). Specifically, we propose a Rotary-Enhanced Linear Attention (RELA) module to efficiently model long-range dependencies. We also introduce a SiLU-based gating mechanism for RELA that helps the model distinguish whether a user behavior reflects a short-term, local interest or a genuine shift in long-term preferences.
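The mechanism can be pictured with a short sketch: rotary position embeddings applied to queries and keys of a linear attention layer, with a SiLU gate modulating the output. The rotary formulation, shared projections, and gate placement below are illustrative assumptions rather than the RecGRELA code.

```python
# Hedged sketch of rotary-enhanced linear attention with a SiLU gate.
import torch
import torch.nn.functional as F

def rotary(x):
    """Apply rotary position embeddings along the last dim (must be even)."""
    L, d = x.shape
    half = d // 2
    freqs = 1.0 / (10000 ** (torch.arange(half) / half))
    ang = torch.arange(L)[:, None] * freqs[None, :]          # (L, d/2) rotation angles
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * ang.cos() - x2 * ang.sin(),
                      x1 * ang.sin() + x2 * ang.cos()], dim=-1)

def gated_rotary_linear_attention(x, w_gate):
    q = k = rotary(F.elu(x) + 1)                             # rotary positions on a positive feature map
    v = x
    kv = torch.einsum("ld,le->de", k, v)                     # linear-complexity state
    out = torch.einsum("ld,de->le", q, kv) / (q @ k.sum(0) + 1e-6).unsqueeze(-1)
    gate = F.silu(x @ w_gate)                                # SiLU gate modulating the attention output
    return gate * out

x = torch.randn(128, 32)
print(gated_rotary_linear_attention(x, torch.randn(32, 32)).shape)  # torch.Size([128, 32])
```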
arXiv Detail & Related papers (2025-06-16T09:56:10Z) - Breaking the Context Bottleneck on Long Time Series Forecasting [10.715175460720403]
Long-term time-series forecasting is essential for planning and decision-making in economics, energy, and transportation. Recent advancements have enhanced the efficiency of these models, but the challenge of effectively leveraging longer sequences persists. We propose the Logsparse Decomposable Multiscaling (LDM) framework for the efficient and effective processing of long sequences.
arXiv Detail & Related papers (2024-12-21T10:29:34Z) - Oscillatory State-Space Models [61.923849241099184]
We propose Linear Oscillatory State-Space models (LinOSS) for efficiently learning on long sequences. A stable discretization, integrated over time using fast associative parallel scans, yields the proposed state-space model. We show that LinOSS is universal, i.e., it can approximate any continuous and causal operator mapping between time-varying functions.
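A toy version of the idea: each state channel behaves like a forced harmonic oscillator, discretized implicitly so the update is stable for any step size. The sketch below unrolls the recurrence sequentially (the paper uses an associative parallel scan), and the step size, readout, and parameter choices are illustrative assumptions.

```python
# Hedged sketch of an oscillatory state-space recurrence with a stable implicit update.
import torch

def oscillatory_ssm(u, omega, dt=0.1):
    """u: (L, d) input sequence; omega: (d,) per-channel oscillation frequencies."""
    L, d = u.shape
    x = torch.zeros(d)   # position-like state
    v = torch.zeros(d)   # velocity-like state
    ys = []
    for t in range(L):
        # Implicit update of  x'' = -omega^2 x + u, stable regardless of dt.
        v = (v + dt * (u[t] - omega ** 2 * x)) / (1 + (dt * omega) ** 2)
        x = x + dt * v
        ys.append(x)
    return torch.stack(ys)           # (L, d) hidden states read out per step

y = oscillatory_ssm(torch.randn(256, 16), omega=torch.rand(16) * 2.0)
print(y.shape)  # torch.Size([256, 16])
```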
arXiv Detail & Related papers (2024-10-04T22:00:13Z) - Long-Sequence Recommendation Models Need Decoupled Embeddings [49.410906935283585]
We identify and characterize a neglected deficiency in existing long-sequence recommendation models. A single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes. We propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are learned separately to fully decouple attention and representation.
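The decoupling is easy to sketch: one embedding table produces the keys/queries used for attention scores, a separate table produces the values that form the pooled representation. Dimensions, the target-attention form, and class names below are illustrative assumptions, not the DARE implementation.

```python
# Hedged sketch of decoupled attention vs. representation embeddings.
import torch
import torch.nn as nn

class DecoupledTargetAttention(nn.Module):
    def __init__(self, num_items, d_attn=16, d_repr=64):
        super().__init__()
        self.attn_emb = nn.Embedding(num_items, d_attn)   # learned only for attention scores
        self.repr_emb = nn.Embedding(num_items, d_repr)   # learned only for the pooled representation
        self.scale = d_attn ** -0.5

    def forward(self, history_ids, target_id):
        q = self.attn_emb(target_id)                      # (d_attn,)
        k = self.attn_emb(history_ids)                    # (L, d_attn)
        v = self.repr_emb(history_ids)                    # (L, d_repr)
        w = torch.softmax(k @ q * self.scale, dim=0)      # attention over the behavior sequence
        return w @ v                                      # user interest vector, (d_repr,)

model = DecoupledTargetAttention(num_items=1000)
print(model(torch.randint(0, 1000, (500,)), torch.tensor(7)).shape)  # torch.Size([64])
```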
arXiv Detail & Related papers (2024-10-03T15:45:15Z) - ELASTIC: Efficient Linear Attention for Sequential Interest Compression [5.689306819772134]
State-of-the-art sequential recommendation models heavily rely on the transformer's attention mechanism. We propose ELASTIC, an Efficient Linear Attention for SequenTial Interest Compression. We conduct extensive experiments on various public datasets and compare it with several strong sequential recommenders.
arXiv Detail & Related papers (2024-08-18T06:41:46Z) - Sparser is Faster and Less is More: Efficient Sparse Attention for Long-Range Transformers [58.5711048151424]
We introduce SPARSEK Attention, a novel sparse attention mechanism designed to overcome computational and memory obstacles.
Our approach integrates a scoring network and a differentiable top-k mask operator, SPARSEK, to select a constant number of KV pairs for each query.
Experimental results reveal that SPARSEK Attention outperforms previous sparse attention methods.
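A simplified sketch of the selection step: a tiny scoring network ranks the KV cache and attention is computed only over a constant-size top-k subset. The real operator is a differentiable relaxation of top-k and operates per query; a hard, query-shared selection is used here for brevity, and all names are assumptions.

```python
# Hedged sketch of top-k KV selection before softmax attention.
import torch

def sparse_topk_attention(q, k, v, score_w, topk=8):
    """q: (Lq, d); k, v: (Lk, d); score_w: (d,) parameters of a tiny scoring network."""
    scores = k @ score_w                                   # (Lk,) importance of each KV pair
    idx = scores.topk(topk).indices                        # keep a constant-size subset
    k_sel, v_sel = k[idx], v[idx]
    attn = torch.softmax(q @ k_sel.T / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v_sel                                    # (Lq, d), cost O(Lq * topk)

d = 32
out = sparse_topk_attention(torch.randn(4, d), torch.randn(1024, d), torch.randn(1024, d),
                            score_w=torch.randn(d))
print(out.shape)  # torch.Size([4, 32])
```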
arXiv Detail & Related papers (2024-06-24T15:55:59Z) - CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling [52.404072802235234]
We introduce Chunked Instruction-aware State Eviction (CItruS), a novel modeling technique that integrates the attention preferences useful for a downstream task into the eviction process of hidden states.
Our training-free method exhibits superior performance on long sequence comprehension and retrieval tasks over several strong baselines under the same memory budget.
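A rough illustration of instruction-aware eviction: cached key/value states are ranked by how much the task instruction attends to them, and only the highest-ranked entries stay within the memory budget. This is a simplification of the CItruS procedure; the averaging rule, budget, and names are assumptions.

```python
# Hedged sketch of attention-guided KV-cache eviction under a fixed memory budget.
import torch

def evict_cache(keys, values, instr_q, budget=128):
    """keys, values: (L, d) cached states; instr_q: (Li, d) instruction-token queries."""
    # Average attention mass each cached position receives from the instruction tokens.
    attn = torch.softmax(instr_q @ keys.T / keys.shape[-1] ** 0.5, dim=-1)    # (Li, L)
    importance = attn.mean(dim=0)                                             # (L,)
    keep = importance.topk(min(budget, keys.shape[0])).indices.sort().values  # preserve order
    return keys[keep], values[keep]

k, v = torch.randn(4096, 64), torch.randn(4096, 64)
k_small, v_small = evict_cache(k, v, instr_q=torch.randn(16, 64))
print(k_small.shape)  # torch.Size([128, 64])
```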
arXiv Detail & Related papers (2024-06-17T18:34:58Z) - Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN).
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
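The segment-and-memorize idea can be pictured with a toy sketch: the long sequence is chunked, each chunk is pooled into a memory block (long-term interest), and the latest chunk provides the short-term interest. The mean pooling and additive fusion below are illustrative assumptions, not the DMAN model.

```python
# Hedged sketch of chunked memory blocks combining long- and short-term interests.
import torch

def dynamic_memory(seq, chunk=50):
    """seq: (L, d) behavior embeddings -> (memory blocks, combined interest vector)."""
    chunks = seq.split(chunk)                                 # series of sub-sequences
    memory = torch.stack([c.mean(dim=0) for c in chunks])     # one block per sub-sequence
    short_term = chunks[-1].mean(dim=0)                       # most recent behaviors
    # Attend from the short-term interest over the memory blocks to recover long-term interest.
    w = torch.softmax(memory @ short_term / seq.shape[-1] ** 0.5, dim=0)
    long_term = w @ memory
    return memory, long_term + short_term

memory, user_vec = dynamic_memory(torch.randn(1000, 64))
print(memory.shape, user_vec.shape)  # torch.Size([20, 64]) torch.Size([64])
```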
arXiv Detail & Related papers (2021-02-18T11:08:54Z)