Multi-view Multi-behavior Contrastive Learning in Recommendation
- URL: http://arxiv.org/abs/2203.10576v1
- Date: Sun, 20 Mar 2022 15:13:28 GMT
- Title: Multi-view Multi-behavior Contrastive Learning in Recommendation
- Authors: Yiqing Wu, Ruobing Xie, Yongchun Zhu, Xiang Ao, Xin Chen, Xu Zhang,
Fuzhen Zhuang, Leyu Lin, Qing He
- Abstract summary: Multi-behavior recommendation (MBR) aims to jointly consider multiple behaviors to improve the target behavior's performance.
We propose a novel Multi-behavior Multi-view Contrastive Learning Recommendation framework.
- Score: 52.42597422620091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-behavior recommendation (MBR) aims to jointly consider multiple
behaviors to improve the target behavior's performance. We argue that MBR
models should: (1) model the coarse-grained commonalities between different
behaviors of a user, (2) consider both individual sequence view and global
graph view in multi-behavior modeling, and (3) capture the fine-grained
differences between multiple behaviors of a user. In this work, we propose a
novel Multi-behavior Multi-view Contrastive Learning Recommendation (MMCLR)
framework, including three new CL tasks to solve the above challenges,
respectively. The multi-behavior CL aims to make the different single-behavior
representations of the same user within each view similar. The multi-view CL
attempts to bridge the gap between a user's sequence-view and graph-view
representations. The behavior distinction CL focuses on modeling the
fine-grained differences between a user's behaviors. In experiments, we conduct
extensive evaluations and ablation tests on two real-world datasets to verify
the effectiveness of MMCLR and its CL tasks, achieving state-of-the-art
performance over existing baselines. Our code will be available at
https://github.com/wyqing20/MMCLR
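To make the three CL tasks above concrete, below is a minimal sketch of how such objectives are commonly written: an in-batch InfoNCE term for the multi-behavior and multi-view CL, and a margin term for the behavior distinction CL. The function names, temperature, margin, negative-sampling scheme, and fusion in the demo are illustrative assumptions, not the authors' MMCLR implementation; see the linked repository for the official code.

# Sketch only: InfoNCE-style stand-ins for the three CL objectives named in the abstract.
# Shapes, temperature, and in-batch negatives are assumptions for illustration.
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, temperature=0.1):
    """In-batch InfoNCE: row i of `positive` is the positive for row i of `anchor`;
    all other rows in the batch serve as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature               # [B, B] similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


def multi_behavior_cl(behavior_embs, temperature=0.1):
    """Pull a user's single-behavior representations (e.g. click vs. buy)
    within one view toward each other. `behavior_embs`: list of [B, D] tensors."""
    loss, pairs = 0.0, 0
    for i in range(len(behavior_embs)):
        for j in range(i + 1, len(behavior_embs)):
            loss = loss + info_nce(behavior_embs[i], behavior_embs[j], temperature)
            pairs += 1
    return loss / max(pairs, 1)


def multi_view_cl(seq_user_emb, graph_user_emb, temperature=0.1):
    """Bridge the same user's sequence-view and graph-view representations."""
    return info_nce(seq_user_emb, graph_user_emb, temperature)


def behavior_distinction_cl(target_emb, auxiliary_emb, fused_emb, margin=0.1):
    """Keep fine-grained differences: the fused user representation should sit
    closer to the target behavior (e.g. buy) than to an auxiliary one (e.g. click).
    A margin ranking loss is one simple way to encode that preference."""
    pos = F.cosine_similarity(fused_emb, target_emb, dim=-1)
    neg = F.cosine_similarity(fused_emb, auxiliary_emb, dim=-1)
    return F.relu(margin + neg - pos).mean()


if __name__ == "__main__":
    B, D = 32, 64
    click_seq, buy_seq = torch.randn(B, D), torch.randn(B, D)   # sequence-view embeddings
    click_gr, buy_gr = torch.randn(B, D), torch.randn(B, D)     # graph-view embeddings
    fused = torch.randn(B, D)                                   # fused user embedding
    loss = (multi_behavior_cl([click_seq, buy_seq])
            + multi_behavior_cl([click_gr, buy_gr])
            + multi_view_cl(click_seq, click_gr)
            + behavior_distinction_cl(buy_seq, click_seq, fused))
    print(float(loss))

In practice these auxiliary losses would be added, with weighting coefficients, to the main recommendation loss; the weighting and the exact positive/negative construction are design choices left open by this sketch.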
Related papers
- Knowledge-Aware Multi-Intent Contrastive Learning for Multi-Behavior Recommendation [6.522900133742931]
Multi-behavioral recommendation provides users with more accurate choices based on diverse behaviors, such as view, add to cart, and purchase.
We propose a novel model: Knowledge-Aware Multi-Intent Contrastive Learning (KAMCL) model.
This model uses relationships in the knowledge graph to construct intents, aiming to mine the connections between users' multi-behaviors from the perspective of intents to achieve more accurate recommendations.
arXiv Detail & Related papers (2024-04-18T08:39:52Z)
- Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for Multi-Behavior Recommendation [52.89816309759537]
Multi-types of behaviors (e.g., clicking, adding to cart, purchasing, etc.) widely exist in most real-world recommendation scenarios.
The state-of-the-art multi-behavior models learn behavior dependencies indistinguishably with all historical interactions as input.
We propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning framework to learn shared and behavior-specific interests for different behaviors.
arXiv Detail & Related papers (2022-08-03T05:28:14Z)
- Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training [88.80694147730883]
We investigate a variety of Modality-Shared Contrastive Language-Image Pre-training (MS-CLIP) frameworks.
In studied conditions, we observe that a mostly unified encoder for vision and language signals outperforms all other variations that separate more parameters.
Our approach outperforms vanilla CLIP by 1.6 points in linear probing on a collection of 24 downstream vision tasks.
arXiv Detail & Related papers (2022-07-26T05:19:16Z)
- Contrastive Meta Learning with Behavior Multiplicity for Recommendation [42.15990960863924]
A well-informed recommendation framework could not only help users identify their interested items, but also benefit the revenue of various online platforms.
We propose Contrastive Meta Learning (CML) to maintain dedicated cross-type behavior dependency for different users.
Our method consistently outperforms various state-of-the-art recommendation methods.
arXiv Detail & Related papers (2022-02-17T08:51:24Z)
- Empowering General-purpose User Representation with Full-life Cycle Behavior Modeling [11.698166058448555]
We propose a novel framework called full-Life cycle User Representation Model (LURM) to tackle this challenge.
LURM consists of two cascaded sub-models: (I) Bag-of-Interests (BoI) encodes user behaviors in any time period into a sparse vector with super-high dimension (e.g., 10^5);
(II) SMEN achieves almost lossless dimensionality reduction, benefiting from a novel multi-anchor module which can learn different aspects of user interests.
arXiv Detail & Related papers (2021-10-20T08:24:44Z)
- Knowledge-Enhanced Hierarchical Graph Transformer Network for Multi-Behavior Recommendation [56.12499090935242]
This work proposes a Knowledge-Enhanced Hierarchical Graph Transformer Network (KHGT) to investigate multi-typed interactive patterns between users and items in recommender systems.
KHGT is built upon a graph-structured neural architecture to capture type-specific behavior characteristics.
We show that KHGT consistently outperforms many state-of-the-art recommendation methods across various evaluation settings.
arXiv Detail & Related papers (2021-10-08T09:44:00Z)
- Graph Meta Network for Multi-Behavior Recommendation [24.251784947151755]
We propose a Multi-Behavior recommendation framework with Graph Meta Network to incorporate the multi-behavior pattern modeling into a meta-learning paradigm.
Our developed MB-GMN empowers the user-item interaction learning with the capability of uncovering type-dependent behavior representations.
arXiv Detail & Related papers (2021-10-08T08:38:27Z)
- Hyper Meta-Path Contrastive Learning for Multi-Behavior Recommendation [61.114580368455236]
User purchasing prediction with multi-behavior information remains a challenging problem for current recommendation systems.
We propose the concept of hyper meta-path to construct hyper meta-paths or hyper meta-graphs to explicitly illustrate the dependencies among different behaviors of a user.
Thanks to the recent success of graph contrastive learning, we leverage it to learn embeddings of user behavior patterns adaptively instead of assigning a fixed scheme to understand the dependencies among different behaviors.
arXiv Detail & Related papers (2021-09-07T04:28:09Z)
- Multi-Interactive Attention Network for Fine-grained Feature Learning in CTR Prediction [48.267995749975476]
In the Click-Through Rate (CTR) prediction scenario, user's sequential behaviors are well utilized to capture the user interest.
Existing methods mostly utilize attention on the behavior of users, which is not always suitable for CTR prediction.
We propose a Multi-Interactive Attention Network (MIAN) to comprehensively extract the latent relationship among all kinds of fine-grained features.
arXiv Detail & Related papers (2020-12-13T05:46:19Z)