Empowering General-purpose User Representation with Full-life Cycle
Behavior Modeling
- URL: http://arxiv.org/abs/2110.11337v4
- Date: Wed, 12 Jul 2023 08:48:42 GMT
- Title: Empowering General-purpose User Representation with Full-life Cycle
Behavior Modeling
- Authors: Bei Yang, Jie Gu, Ke Liu, Xiaoxiao Xu, Renjun Xu, Qinghui Sun, Hong
Liu
- Abstract summary: We propose a novel framework called full-Life cycle User Representation Model (LURM) to tackle this challenge.
LURM consists of two cascaded sub-models: (I) Bag-of-Interests (BoI) encodes user behaviors in any time period into a sparse vector with super-high dimension (e.g., 10^5);
(II) Self-supervised Multi-anchor Encoder Network (SMEN) achieves almost lossless dimensionality reduction, benefiting from a novel multi-anchor module which can learn different aspects of user interests.
- Score: 11.698166058448555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User Modeling plays an essential role in industry. In this field,
task-agnostic approaches, which generate general-purpose representations
applicable to diverse downstream user cognition tasks, are a promising
direction, being more valuable and economical than task-specific
representation learning. With the rapid development of Internet service
platforms, user behaviors have been accumulated continuously. However,
existing general-purpose user representation research has little ability to
model the full life cycle of extremely long behavior sequences accumulated
since user registration. In this study, we propose a novel framework called
full-Life cycle User Representation Model (LURM) to tackle this challenge.
Specifically, LURM consists of two cascaded sub-models: (I) Bag-of-Interests
(BoI), which encodes user behaviors in any time period into a sparse vector
with super-high dimension (e.g., 10^5); (II) Self-supervised Multi-anchor
Encoder Network (SMEN), which maps sequences of BoI features to multiple
low-dimensional user representations. In particular, SMEN achieves almost
lossless dimensionality reduction, benefiting from a novel multi-anchor module
that can learn different aspects of user interests. Experiments on several
benchmark datasets show that our approach outperforms state-of-the-art
general-purpose representation methods.
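For intuition, here is a minimal sketch of the two cascaded stages described in the abstract, assuming a toy setup: behaviors are (category, keyword) pairs, BoI is approximated by feature hashing into 10^5 buckets, and SMEN is reduced to a handful of untrained linear "anchor" projections. The helper names (`bag_of_interests`, `smen_encode`, `N_BUCKETS`, `anchors`) are illustrative only; the paper's actual vocabulary construction and self-supervised training are not reproduced here.

```python
import zlib
import numpy as np

# --- Stage I: Bag-of-Interests (BoI), sketched here as feature hashing ---
# Hypothetical setup: each behavior is a (category, keyword) pair. The paper
# builds an interest vocabulary of roughly 10^5 entries; hashing into a fixed
# number of buckets is a simplification used only for this illustration.
N_BUCKETS = 100_000

def bag_of_interests(behaviors):
    """Encode the behaviors of one time period into a sparse count vector."""
    vec = np.zeros(N_BUCKETS, dtype=np.float32)
    for category, keyword in behaviors:
        vec[zlib.crc32(f"{category}:{keyword}".encode()) % N_BUCKETS] += 1.0
    return vec

# --- Stage II: multi-anchor encoder (SMEN), heavily simplified ---
# One low-dimensional projection per "anchor"; each anchor is meant to capture
# a different aspect of user interests. The real SMEN is trained with a
# self-supervised objective; here the projections are random placeholders.
rng = np.random.default_rng(0)
N_ANCHORS, DIM = 4, 16
anchors = [rng.normal(scale=0.01, size=(N_BUCKETS, DIM)).astype(np.float32)
           for _ in range(N_ANCHORS)]

def smen_encode(boi_sequence):
    """Map a sequence of BoI vectors (one per time period) to N_ANCHORS embeddings."""
    pooled = boi_sequence.mean(axis=0)        # aggregate over time periods
    return [pooled @ W for W in anchors]      # one low-dimensional view per anchor

# Usage: two time periods of toy behaviors -> four 16-d user representations.
periods = np.stack([bag_of_interests([("click", "shoes"), ("buy", "shoes")]),
                    bag_of_interests([("click", "laptop")])])
reps = smen_encode(periods)
print(len(reps), reps[0].shape)               # 4 (16,)
```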
Related papers
- Adaptive Learning on User Segmentation: Universal to Specific Representation via Bipartite Neural Interaction [15.302921887305283]
We propose a novel learning framework that first learns a general universal user representation through an information bottleneck.
It then merges and learns a segmentation-specific or task-specific representation through bipartite neural interaction.
Our proposed method is evaluated on two open-source benchmarks and two offline business datasets, and is deployed in two online marketing applications to predict users' CVR.
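As a side note, the "information bottleneck" step mentioned above is most commonly realized as a variational IB objective: the encoder emits a Gaussian posterior over the representation, and a KL term to a standard normal prior acts as the compression penalty alongside the downstream loss. The PyTorch sketch below shows that generic recipe; the class and dimension choices (`VIBUserEncoder`, `z_dim=32`, `beta`) are placeholders rather than the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBUserEncoder(nn.Module):
    """Generic variational information-bottleneck encoder (illustrative only)."""
    def __init__(self, in_dim=128, z_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        # KL(q(z|x) || N(0, I)): the compression term of the bottleneck
        kl = 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar).sum(dim=-1).mean()
        return z, kl

# Usage: total loss = downstream task loss on z + beta * KL (compression trade-off).
encoder, head, beta = VIBUserEncoder(), nn.Linear(32, 1), 1e-3
x, y = torch.randn(8, 128), torch.randint(0, 2, (8, 1)).float()
z, kl = encoder(x)
loss = F.binary_cross_entropy_with_logits(head(z), y) + beta * kl
loss.backward()
```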
arXiv Detail & Related papers (2024-09-23T12:02:23Z)
- All in One Framework for Multimodal Re-identification in the Wild [58.380708329455466]
A multimodal learning paradigm for ReID is introduced, referred to as All-in-One (AIO).
AIO harnesses a frozen pre-trained big model as an encoder, enabling effective multimodal retrieval without additional fine-tuning.
Experiments on cross-modal and multimodal ReID reveal that AIO not only adeptly handles various modal data but also excels in challenging contexts.
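In practice, "a frozen pre-trained big model as an encoder" comes down to sharing one backbone across modalities with its gradients disabled, while only lightweight per-modality projections remain trainable. The sketch below illustrates that pattern with a generic Transformer stand-in and made-up feature sizes; it is not AIO's actual backbone or tokenizer design.

```python
import torch
import torch.nn as nn

# Stand-in for a large pre-trained backbone; AIO's real encoder is not shown here.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4,
)
encoder.requires_grad_(False)   # frozen: the big model is never fine-tuned
encoder.eval()

# Per-modality input projections ("tokenizers") stay small and trainable.
to_tokens = nn.ModuleDict({
    "rgb": nn.Linear(512, 256),
    "infrared": nn.Linear(512, 256),
    "text": nn.Linear(300, 256),
})

def embed(modality, feats):
    """Map modality-specific token features to a shared retrieval embedding."""
    tokens = to_tokens[modality](feats)      # (batch, seq_len, 256)
    return encoder(tokens).mean(dim=1)       # mean-pooled embedding

# Cross-modal retrieval: score text queries against an RGB gallery.
query = embed("text", torch.randn(2, 16, 300))
gallery = embed("rgb", torch.randn(5, 16, 512))
scores = query @ gallery.T                   # (2, 5) similarity matrix
```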
arXiv Detail & Related papers (2024-05-08T01:04:36Z)
- Generalized User Representations for Transfer Learning [6.953653891411339]
We present a novel framework for user representation in large-scale recommender systems.
Our approach employs a two-stage methodology combining representation learning and transfer learning.
We show how the proposed framework can significantly reduce infrastructure costs compared to alternative approaches.
arXiv Detail & Related papers (2024-03-01T15:05:21Z)
- MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
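One plausible reading of the "dynamic fusion module" is a user-conditioned gate that mixes an item's modality features (e.g., text and image) into a single user-adaptive item vector. The sketch below implements only that generic reading; the module name and dimensions are assumptions, not MISSRec's actual design.

```python
import torch
import torch.nn as nn

class UserAdaptiveFusion(nn.Module):
    """Mix item modality features with weights predicted from the user interest vector."""
    def __init__(self, dim=64, n_modalities=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_modalities)

    def forward(self, user_interest, item_modal_feats):
        # user_interest: (batch, dim); item_modal_feats: (batch, n_modalities, dim)
        weights = torch.softmax(self.gate(user_interest), dim=-1)   # (batch, M)
        return (weights.unsqueeze(-1) * item_modal_feats).sum(dim=1)

fusion = UserAdaptiveFusion()
user = torch.randn(4, 64)        # e.g., output of the sequence encoder
item = torch.randn(4, 2, 64)     # [text_feat, image_feat] per candidate item
item_repr = fusion(user, item)   # user-adaptive item representation, (4, 64)
```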
arXiv Detail & Related papers (2023-08-22T04:06:56Z)
- Learning Large-scale Universal User Representation with Sparse Mixture of Experts [1.2722697496405464]
We propose SUPERMOE, a generic framework to obtain high-quality user representations from multiple tasks.
Specifically, user behaviour sequences are encoded by an MoE transformer, which allows the model capacity to scale to billions of parameters.
To deal with the seesaw phenomenon when learning across multiple tasks, we design a new loss function with task indicators.
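The "loss function with task indicators" can be pictured as a masked multi-task objective: each sample carries a per-task indicator, and only the indicated tasks contribute loss for that sample, which limits gradient interference between tasks. The sketch below shows that generic idea with placeholder prediction heads; the MoE transformer encoder itself is omitted, and this is not SUPERMOE's exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_tasks, dim = 3, 128
heads = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_tasks)])

def multitask_loss(user_repr, labels, task_indicator):
    """user_repr: (B, dim); labels: (B, T); task_indicator: (B, T) in {0, 1}."""
    losses = []
    for t, head in enumerate(heads):
        per_sample = F.binary_cross_entropy_with_logits(
            head(user_repr).squeeze(-1), labels[:, t], reduction="none")
        mask = task_indicator[:, t]
        # Average only over samples where task t is active (avoid divide-by-zero).
        losses.append((per_sample * mask).sum() / mask.sum().clamp(min=1.0))
    return torch.stack(losses).sum()

u = torch.randn(8, dim)
y = torch.randint(0, 2, (8, n_tasks)).float()
indicator = torch.randint(0, 2, (8, n_tasks)).float()
loss = multitask_loss(u, y, indicator)
loss.backward()
```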
arXiv Detail & Related papers (2022-07-11T06:19:03Z)
- Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models the complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z)
- Multi-view Multi-behavior Contrastive Learning in Recommendation [52.42597422620091]
Multi-behavior recommendation (MBR) aims to jointly consider multiple behaviors to improve the target behavior's performance.
We propose a novel Multi-behavior Multi-view Contrastive Learning Recommendation framework.
arXiv Detail & Related papers (2022-03-20T15:13:28Z)
- Knowledge-Enhanced Hierarchical Graph Transformer Network for Multi-Behavior Recommendation [56.12499090935242]
This work proposes a Knowledge-Enhanced Hierarchical Graph Transformer Network (KHGT) to investigate multi-typed interactive patterns between users and items in recommender systems.
KHGT is built upon a graph-structured neural architecture to capture type-specific behavior characteristics.
We show that KHGT consistently outperforms many state-of-the-art recommendation methods across various evaluation settings.
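"Type-specific behavior characteristics" can be captured, at minimum, by giving each interaction type (e.g., click, add-to-cart, purchase) its own transformation during message passing from items to users. The sketch below shows only that bare-bones idea; KHGT's hierarchical graph transformer and knowledge-enhanced components are not reproduced, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class TypeSpecificPropagation(nn.Module):
    """Aggregate item->user messages with one weight matrix per behavior type."""
    def __init__(self, dim=32, behavior_types=("click", "cart", "purchase")):
        super().__init__()
        self.transforms = nn.ModuleDict({b: nn.Linear(dim, dim) for b in behavior_types})

    def forward(self, item_emb, adj_by_type):
        # item_emb: (n_items, dim); adj_by_type[b]: (n_users, n_items) 0/1 matrix
        messages = [torch.relu(adj @ self.transforms[b](item_emb))
                    for b, adj in adj_by_type.items()]
        return torch.stack(messages, dim=0).sum(dim=0)   # (n_users, dim)

prop = TypeSpecificPropagation()
items = torch.randn(10, 32)
adjs = {b: (torch.rand(5, 10) > 0.7).float() for b in ("click", "cart", "purchase")}
user_emb = prop(items, adjs)   # (5, 32) user embeddings
```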
arXiv Detail & Related papers (2021-10-08T09:44:00Z)
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
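The data side of this approach is simply bucketing interactions into time slices and building one user-item graph per slice; the auxiliary task then asks the model to predict the next slice's interactions from the current ones. A minimal preprocessing sketch is given below, with made-up field names and slice width and no model code.

```python
from collections import defaultdict

# Each interaction: (user_id, item_id, timestamp). Slice width is a free choice.
interactions = [(0, 10, 5), (0, 11, 17), (1, 10, 18), (1, 12, 33)]
SLICE_WIDTH = 10

def build_time_sliced_graphs(events, width):
    """Return {slice_index: list of (user, item) edges}, one graph per slice."""
    graphs = defaultdict(list)
    for user, item, ts in events:
        graphs[ts // width].append((user, item))
    return dict(graphs)

graphs = build_time_sliced_graphs(interactions, SLICE_WIDTH)
# Auxiliary temporal prediction: use slice t's graph to predict slice t+1's edges.
for t in sorted(graphs):
    target = graphs.get(t + 1, [])
    print(f"slice {t}: {len(graphs[t])} edges, predict {len(target)} future edges")
```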
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
- Interest-oriented Universal User Representation via Contrastive Learning [28.377233340976197]
We attempt to improve universal user representation from two points of view.
A contrastive self-supervised learning paradigm is presented to guide the representation model training.
A novel multi-interest extraction module is presented.
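The contrastive paradigm referred to here typically treats two augmented views of the same user's behavior sequence as a positive pair and other users in the batch as negatives, scored with an InfoNCE-style loss. The sketch below shows only that generic loss; the paper's specific augmentations and multi-interest extraction module are not reproduced.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same users."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Usage: encode two randomly masked/cropped versions of each behavior sequence.
view_a, view_b = torch.randn(16, 64), torch.randn(16, 64)
loss = info_nce(view_a, view_b)
```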
arXiv Detail & Related papers (2021-09-18T07:42:00Z)
- Exploiting Behavioral Consistence for Universal User Representation [11.290137806288191]
We focus on developing a universal user representation model.
The obtained universal representations are expected to contain rich information.
We propose Self-supervised User Modeling Network (SUMN) to encode behavior data into the universal representation.
arXiv Detail & Related papers (2020-12-11T06:10:14Z)