Introducing Context Information in Lifelong Sequential Modeling using Temporal Convolutional Networks
- URL: http://arxiv.org/abs/2502.12634v1
- Date: Tue, 18 Feb 2025 08:24:53 GMT
- Title: Introducing Context Information in Lifelong Sequential Modeling using Temporal Convolutional Networks
- Authors: Ting Guo, Zhaoyang Yang, Qinsong Zeng, Ming Chen
- Abstract summary: We introduce a novel network which employs the Temporal Convolutional Network (TCN) to generate context-aware representations for each item throughout the lifelong sequence.
We also incorporate a lightweight sub-network to create convolution filters based on users' basic profile features.
The findings indicate that the proposed network surpasses existing methods in terms of prediction accuracy and online performance metrics.
- Score: 4.561273938467592
- Abstract: The importance of lifelong sequential modeling (LSM) is growing in the realm of social media recommendation systems. A key component in this process is the attention module, which derives interest representations with respect to candidate items from the sequence. Typically, attention modules function in a point-wise fashion, concentrating only on the relevance of individual items in the sequence to the candidate item. However, the context information in neighboring items, which is useful for more accurately evaluating the significance of each item, has not been taken into account. In this study, we introduce a novel network which employs the Temporal Convolutional Network (TCN) to generate context-aware representations for each item throughout the lifelong sequence. These improved representations are then utilized in the attention module to produce context-aware interest representations. Expanding on this TCN framework, we present an enhancement module which includes multiple TCN layers and their respective attention modules to capture interest representations across different context scopes. Additionally, we incorporate a lightweight sub-network to create convolution filters based on users' basic profile features. These personalized filters are then applied in the TCN layers instead of the original global filters to produce more user-specific representations. We performed experiments on both a public dataset and a proprietary dataset. The findings indicate that the proposed network surpasses existing methods in terms of prediction accuracy and online performance metrics.
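The abstract does not include an implementation, but the two central ideas can be illustrated with a minimal PyTorch-style sketch: a causal (temporal) 1-D convolution that turns each item embedding into a context-aware representation, and a lightweight sub-network that maps a user's basic profile features to per-user convolution kernels applied in place of global filters. All class, function, and parameter names below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedTCNLayer(nn.Module):
    """Sketch: causal depthwise convolution over the lifelong sequence,
    with kernels generated from user profile features (assumed design)."""
    def __init__(self, dim, kernel_size, profile_dim):
        super().__init__()
        self.dim = dim
        self.kernel_size = kernel_size
        # Lightweight sub-network: user profile -> depthwise convolution kernels.
        self.filter_gen = nn.Sequential(
            nn.Linear(profile_dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim * kernel_size),
        )

    def forward(self, seq_emb, profile_feat):
        # seq_emb: (B, L, D) item embeddings of the lifelong sequence
        # profile_feat: (B, P) basic user profile features
        B, L, D = seq_emb.shape
        kernels = self.filter_gen(profile_feat).view(B * D, 1, self.kernel_size)
        x = seq_emb.transpose(1, 2).reshape(1, B * D, L)   # fold batch into channels
        x = F.pad(x, (self.kernel_size - 1, 0))            # causal (left) padding
        out = F.conv1d(x, kernels, groups=B * D)           # per-user depthwise filters
        return out.view(B, D, L).transpose(1, 2)           # (B, L, D) context-aware items

def target_attention(context_emb, candidate_emb):
    # Point-wise attention of each (now context-aware) item against the candidate item.
    # context_emb: (B, L, D), candidate_emb: (B, D)
    scores = torch.einsum("bld,bd->bl", context_emb, candidate_emb) / context_emb.size(-1) ** 0.5
    weights = torch.softmax(scores, dim=-1)
    return torch.einsum("bl,bld->bd", weights, context_emb)  # interest representation
```

Following the abstract, the enhancement module would stack several such TCN layers (e.g., with different kernel sizes or dilations), each feeding its own attention module, so that interest representations are captured across different context scopes; the exact stacking scheme is not specified here and this sketch only shows a single layer.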
Related papers
- Multi-granularity Interest Retrieval and Refinement Network for Long-Term User Behavior Modeling in CTR Prediction [68.90783662117936]
Click-through Rate (CTR) prediction is crucial for online personalization platforms.
Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction.
We propose the Multi-granularity Interest Retrieval and Refinement Network (MIRRN).
arXiv Detail & Related papers (2024-11-22T15:29:05Z) - Learning Partially Aligned Item Representation for Cross-Domain Sequential Recommendation [72.73379646418435]
Cross-domain sequential recommendation aims to uncover and transfer users' sequential preferences across domains.
Misaligned item representations can potentially lead to sub-optimal sequential modeling and user representation alignment.
We propose a model-agnostic framework called Cross-domain item representation Alignment for Cross-Domain Sequential Recommendation.
arXiv Detail & Related papers (2024-05-21T03:25:32Z) - RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation [53.4319652364256]
This paper presents the RefSAM model, which explores the potential of SAM for referring video object segmentation.
Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight cross-modal module.
We employ a parameter-efficient tuning strategy to align and fuse the language and vision features effectively.
arXiv Detail & Related papers (2023-07-03T13:21:58Z) - OST: Efficient One-stream Network for 3D Single Object Tracking in Point Clouds [6.661881950861012]
We propose a novel one-stream network with the strength of instance-level encoding, which avoids the correlation operations used in previous Siamese networks.
The proposed method has achieved considerable performance not only for class-specific tracking but also for class-agnostic tracking with less computation and higher efficiency.
arXiv Detail & Related papers (2022-10-16T12:31:59Z) - VLSNR:Vision-Linguistics Coordination Time Sequence-aware News
Recommendation [0.0]
Multimodal semantics is beneficial for enhancing the comprehension of users' temporal and long-lasting interests.
In our work, we propose a vision-linguistics coordination time sequence-aware news recommendation framework.
We also construct a large-scale multimodal news recommendation dataset, V-MIND.
arXiv Detail & Related papers (2022-10-06T14:27:37Z) - Dynamic Prototype Convolution Network for Few-Shot Semantic Segmentation [33.93192093090601]
A key challenge for few-shot semantic segmentation (FSS) is how to tailor a desirable interaction between support and query features.
We propose a dynamic prototype convolution network (DPCN) to fully capture the intrinsic details for accurate FSS.
Our DPCN is also flexible and efficient under the k-shot FSS setting.
arXiv Detail & Related papers (2022-04-22T11:12:37Z) - IA-GCN: Interactive Graph Convolutional Network for Recommendation [13.207235494649343]
Graph Convolutional Network (GCN) has become the new state-of-the-art for Collaborative Filtering (CF) based Recommender Systems (RS).
We build bilateral interactive guidance between each user-item pair and propose a new model named IA-GCN (short for InterActive GCN).
Our model is built on top of LightGCN, a state-of-the-art GCN model for CF, and can be combined with various GCN-based CF architectures in an end-to-end fashion.
arXiv Detail & Related papers (2022-04-08T03:38:09Z) - Learning Target-aware Representation for Visual Tracking via Informative
Interactions [49.552877881662475]
We introduce a novel backbone architecture to improve the target-perception ability of feature representations for tracking.
The proposed GIM module and InBN mechanism are general and applicable to different backbone types including CNN and Transformer.
arXiv Detail & Related papers (2022-01-07T16:22:27Z) - ContextNet: A Click-Through Rate Prediction Framework Using Contextual
information to Refine Feature Embedding [2.146541845019669]
We propose a novel CTR framework named ContextNet that implicitly models high-order feature interactions.
We conduct extensive experiments on four real-world datasets, and the results demonstrate that our proposed ContextNet-PFFN and ContextNet-SFFN models significantly outperform state-of-the-art models such as DeepFM and xDeepFM.
arXiv Detail & Related papers (2021-07-26T08:29:40Z) - Multi-Granularity Reference-Aided Attentive Feature Aggregation for
Video-based Person Re-identification [98.7585431239291]
Video-based person re-identification aims at matching the same person across video clips.
In this paper, we propose an attentive feature aggregation module, namely the Multi-Granularity Reference-aided Attentive Feature Aggregation module (MG-RAFA).
Our framework achieves state-of-the-art performance on three benchmark datasets.
arXiv Detail & Related papers (2020-03-27T03:49:21Z) - Global Context-Aware Progressive Aggregation Network for Salient Object
Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z)