Exploiting Behavioral Consistence for Universal User Representation
- URL: http://arxiv.org/abs/2012.06146v1
- Date: Fri, 11 Dec 2020 06:10:14 GMT
- Title: Exploiting Behavioral Consistence for Universal User Representation
- Authors: Jie Gu, Feng Wang, Qinghui Sun, Zhiquan Ye, Xiaoxiao Xu, Jingmin Chen,
Jun Zhang
- Abstract summary: We focus on developing a universal user representation model.
The obtained universal representations are expected to contain rich information.
We propose Self-supervised User Modeling Network (SUMN) to encode behavior data into the universal representation.
- Score: 11.290137806288191
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User modeling is critical for developing personalized services in industry. A
common way for user modeling is to learn user representations that can be
distinguished by their interests or preferences. In this work, we focus on
developing a universal user representation model. The obtained universal
representations are expected to contain rich information, and be applicable to
various downstream applications without further modifications (e.g., user
preference prediction and user profiling). Accordingly, we can be free from the
heavy work of training task-specific models for every downstream task as in
previous works. Specifically, we propose the Self-supervised User Modeling Network
(SUMN) to encode behavior data into the universal representation. It includes
two key components. The first one is a new learning objective, which guides the
model to fully identify and preserve valuable user information under a
self-supervised learning framework. The other one is a multi-hop aggregation
layer, which benefits the model capacity in aggregating diverse behaviors.
Extensive experiments on benchmark datasets show that our approach can
outperform state-of-the-art unsupervised representation methods, and even
compete with supervised ones.
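The abstract describes the multi-hop aggregation layer only at a high level. One common reading of "multi-hop aggregation" is attention pooling that repeatedly re-queries the behavior set, using each hop's pooled result as the next hop's query. A minimal NumPy sketch under that assumption (the function names and the dot-product attention form are ours, not necessarily the paper's exact design):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_aggregate(behaviors, query, hops=2):
    """Aggregate a set of behavior embeddings into a single user vector.

    behaviors: (n, d) array, one embedding per behavior
    query:     (d,) initial query vector
    Each hop attends over all behaviors and feeds the pooled
    result back in as the query for the next hop.
    """
    user_vec = query
    for _ in range(hops):
        scores = behaviors @ user_vec   # (n,) dot-product attention logits
        weights = softmax(scores)       # attention distribution over behaviors
        user_vec = weights @ behaviors  # weighted sum -> refined user vector
    return user_vec
```

Each additional hop lets the pooled vector re-weight the behaviors in light of what was aggregated so far, which is one way diverse behaviors can be combined more expressively than with a single attention pass.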
Related papers
- Adaptive Learning on User Segmentation: Universal to Specific Representation via Bipartite Neural Interaction [15.302921887305283]
We propose a novel learning framework that first learns a general universal user representation through an information bottleneck.
It then merges and learns a segmentation-specific or task-specific representation through neural interaction.
Our proposed method is evaluated in two open-source benchmarks, two offline business datasets, and deployed on two online marketing applications to predict users' CVR.
arXiv Detail & Related papers (2024-09-23T12:02:23Z)
- Generalized User Representations for Transfer Learning [6.953653891411339]
We present a novel framework for user representation in large-scale recommender systems.
Our approach employs a two-stage methodology combining representation learning and transfer learning.
We show how the proposed framework can significantly reduce infrastructure costs compared to alternative approaches.
arXiv Detail & Related papers (2024-03-01T15:05:21Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Prototype-guided Cross-task Knowledge Distillation for Large-scale Models [103.04711721343278]
Cross-task knowledge distillation helps train a small student model to obtain competitive performance.
We propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios.
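ProC-KD's prototype mechanism is not detailed in this summary; as background, the core of any distillation setup is matching the student's softened outputs to the teacher's. A generic NumPy sketch of that soft-target loss (the temperature value and function names are illustrative, not from the paper):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher temperature gives softer targets."""
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student exactly matches the teacher and grows as their softened distributions diverge.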
arXiv Detail & Related papers (2022-12-26T15:00:42Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on his/her interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Scaling Law for Recommendation Models: Towards General-purpose User Representations [3.3073775218038883]
We explore the possibility of general-purpose user representation learning by training a universal user encoder at large scales.
We show that the scaling law holds in the user modeling area, where the training error scales as a power-law with the amount of compute.
We also investigate how the performance changes according to the scale-up factors, i.e., model capacity, sequence length and batch size.
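A power-law relation of the form loss = a * compute**(-b) can be checked on logged training runs by linear regression in log-log space, since log(loss) is then linear in log(compute). A small sketch (the synthetic data and the exponent value are illustrative, not the paper's measurements):

```python
import numpy as np

def fit_power_law(compute, loss):
    """Fit loss = a * compute**(-b) via least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
    return np.exp(intercept), -slope  # returns (a, b)

# Synthetic run: loss decays as a power-law in compute.
compute = np.logspace(3, 9, 7)
loss = 2.0 * compute ** -0.35
a, b = fit_power_law(compute, loss)
```

On real training curves the fit is noisy, but a roughly straight line on a log-log plot is the usual signature of a scaling law.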
arXiv Detail & Related papers (2021-11-15T10:39:29Z)
- Empowering General-purpose User Representation with Full-life Cycle Behavior Modeling [11.698166058448555]
We propose a novel framework called full-Life cycle User Representation Model (LURM) to tackle this challenge.
LURM consists of two cascaded sub-models: (I) Bag-of-Interests (BoI), which encodes user behaviors in any time period into a sparse vector with super-high dimension (e.g., 10^5); (II) Self-supervised Multi-anchor Encoder Network (SMEN), which maps sequences of BoI features to low-dimensional user representations.
SMEN achieves almost lossless dimensionality reduction, benefiting from a novel multi-anchor module which can learn different aspects of user interests.
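A sparse, super-high-dimensional bag encoding in the spirit of BoI can be approximated with feature hashing: each behavior token increments one bucket of a large, mostly empty count vector. A sketch of that general idea (the bucket count and hashing scheme are our assumptions, not LURM's actual construction):

```python
import hashlib
import numpy as np

def bag_of_interests(behaviors, dim=100_000):
    """Encode a list of behavior tokens into a sparse count vector of size `dim`."""
    vec = np.zeros(dim)
    for token in behaviors:
        # Stable hash so the same token always lands in the same bucket.
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec
```

In practice such a vector would be stored in a sparse format; the dense array here is only for clarity.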
arXiv Detail & Related papers (2021-10-20T08:24:44Z)
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
- Interest-oriented Universal User Representation via Contrastive Learning [28.377233340976197]
We attempt to improve universal user representation from two points of view.
A contrastive self-supervised learning paradigm is presented to guide the representation model training.
A novel multi-interest extraction module is presented.
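The contrastive self-supervised paradigm is only named here; a standard instantiation is the InfoNCE objective, where two views of the same user form a positive pair and the other users in the batch serve as negatives. A NumPy sketch under that assumption (not necessarily the paper's exact loss):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE: each anchor should prefer its own positive over in-batch negatives.

    anchors, positives: (n, d) arrays; row i of each forms one positive pair.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                     # (n, n) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))            # matched pairs on diagonal
```

The loss is small when each anchor is most similar to its own positive and large when pairings are scrambled, which is what drives the representation to separate users by interest.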
arXiv Detail & Related papers (2021-09-18T07:42:00Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Disentangled Graph Collaborative Filtering [100.26835145396782]
Disentangled Graph Collaborative Filtering (DGCF) is a new model for learning informative representations of users and items from interaction data.
By modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations.
DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE.
arXiv Detail & Related papers (2020-07-03T15:37:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.