A Behavior-aware Graph Convolution Network Model for Video Recommendation
- URL: http://arxiv.org/abs/2106.15402v1
- Date: Sun, 27 Jun 2021 08:24:45 GMT
- Title: A Behavior-aware Graph Convolution Network Model for Video Recommendation
- Authors: Wei Zhuo, Kunchi Liu, Taofeng Xue, Beihong Jin, Beibei Li, Xinzhou Dong, He Chen, Wenhai Pan, Xuejian Zhang, Shuo Zhou
- Abstract summary: We present a model named Sagittarius to capture the influence between users and videos.
Sagittarius differentiates among user behaviors by weighting them.
It then fuses the semantics of user behaviors into the embeddings of users and videos.
- Score: 9.589431810005774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interactions between users and videos are the major data source for video recommendation. Despite the many existing recommendation methods, user behaviors on videos, which imply the complex relations between users and videos, are still far from being fully explored. In this paper, we present a model named Sagittarius. Sagittarius adopts a graph convolutional neural network to capture the influence between users and videos. In particular, Sagittarius differentiates among user behaviors by weighting them and fuses the semantics of user behaviors into the embeddings of users and videos. Moreover, Sagittarius combines multiple optimization objectives to learn user and video embeddings, and then performs video recommendation using the learned embeddings. The experimental results on multiple datasets show that Sagittarius outperforms several state-of-the-art models in terms of recall, unique recall and NDCG.
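To make the mechanism described above concrete, here is a minimal, hypothetical sketch of behavior-weighted graph convolution: per-behavior interaction matrices are collapsed into a single weighted adjacency matrix, normalized, and used to propagate user and video embeddings. The behavior names, weights, and dimensions are illustrative assumptions, not the paper's exact formulation.

# A minimal, hypothetical sketch of behavior-weighted graph convolution in the
# spirit of Sagittarius. The behavior weights, embedding sizes, and fusion step
# are illustrative assumptions, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_videos, dim = 4, 6, 8

# One interaction matrix per behavior type (e.g. click, like, finish),
# each entry 1 if the user performed that behavior on the video.
behaviors = {"click":  rng.integers(0, 2, (n_users, n_videos)),
             "like":   rng.integers(0, 2, (n_users, n_videos)),
             "finish": rng.integers(0, 2, (n_users, n_videos))}

# Assumed scalar weights that differentiate behavior importance
# (learned in a real model).
behavior_weight = {"click": 0.5, "like": 1.0, "finish": 1.5}

# Collapse the multi-behavior interactions into one weighted adjacency matrix.
A = sum(behavior_weight[b] * m for b, m in behaviors.items()).astype(float)

# Symmetric normalization, as in standard GCN propagation.
du = np.maximum(A.sum(axis=1), 1e-8)   # user degrees
dv = np.maximum(A.sum(axis=0), 1e-8)   # video degrees
A_norm = A / np.sqrt(du[:, None]) / np.sqrt(dv[None, :])

user_emb = rng.normal(size=(n_users, dim))
video_emb = rng.normal(size=(n_videos, dim))

# One propagation layer: each side aggregates the other side's embeddings
# through the behavior-weighted, normalized graph.
user_out = A_norm @ video_emb
video_out = A_norm.T @ user_emb

# Recommendation scores are inner products of the propagated embeddings.
scores = user_out @ video_out.T
print("top video for user 0:", int(scores[0].argmax()))

In the full model, the behavior weights would be learned, the behavior semantics would be fused into the embeddings themselves, and several optimization objectives would be trained jointly.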
Related papers
- HAVANA: Hierarchical stochastic neighbor embedding for Accelerated Video ANnotAtions [59.71751978599567]
This paper presents a novel annotation pipeline that uses pre-extracted features and dimensionality reduction to accelerate the temporal video annotation process.
We demonstrate significant reductions in annotation effort compared to traditional linear methods, achieving more than a 10x reduction in the clicks required to annotate over 12 hours of video.
arXiv Detail & Related papers (2024-09-16T18:15:38Z)
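As a rough stand-in for the pipeline sketched above: HAVANA is built on hierarchical stochastic neighbor embedding, but plain t-SNE from scikit-learn applied to pre-extracted (here synthetic) frame features illustrates the projection step that lets visually similar frames be selected and annotated in bulk.

# Rough stand-in for the HAVANA idea: project pre-extracted frame features to
# 2-D so similar frames cluster together and can be annotated in bulk. HAVANA
# uses *hierarchical* SNE; plain t-SNE is used here only to illustrate the
# pipeline, and the features below are synthetic.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Pretend these are 512-d features extracted from 300 video frames.
frame_features = rng.normal(size=(300, 512))

# Perplexity must be smaller than the number of samples; 30 is a common default.
coords_2d = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(frame_features)
print(coords_2d.shape)  # (300, 2): one point per frame for the annotation UI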
- A Vlogger-augmented Graph Neural Network Model for Micro-video Recommendation [7.54949302096348]
We propose a vlogger-augmented graph neural network model VA-GNN, which takes the effect of vloggers into consideration.
Specifically, we construct a tripartite graph with users, micro-videos, and vloggers as nodes, capturing user preferences from different views.
When predicting the next user-video interaction, we adaptively combine the user preferences for a video itself and its vlogger.
arXiv Detail & Related papers (2024-05-28T15:13:29Z)
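A hypothetical sketch of the adaptive combination step described for VA-GNN: blend a user's affinity for a candidate video with their affinity for its vlogger through a learned gate. The sigmoid gate, shapes, and values below are illustrative assumptions rather than the paper's architecture.

# Hypothetical sketch of VA-GNN's "adaptive combination": a gate decides how
# much the prediction relies on the video itself versus its vlogger. All
# vectors are random stand-ins for embeddings learned on the tripartite graph.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dim = 8
rng = np.random.default_rng(0)
user = rng.normal(size=dim)      # user embedding from the tripartite graph
video = rng.normal(size=dim)     # embedding of a candidate micro-video
vlogger = rng.normal(size=dim)   # embedding of that video's vlogger

# Gate conditioned on the user and both candidates (a learned vector in a
# real model; random here).
w_gate = rng.normal(size=3 * dim)
g = sigmoid(w_gate @ np.concatenate([user, video, vlogger]))

# Adaptive score: g weighs the video view, (1 - g) the vlogger view.
score = g * (user @ video) + (1 - g) * (user @ vlogger)
print(f"gate={g:.2f} score={score:.3f}")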
- Knowledge-Aware Multi-Intent Contrastive Learning for Multi-Behavior Recommendation [6.522900133742931]
Multi-behavioral recommendation provides users with more accurate choices based on diverse behaviors, such as view, add to cart, and purchase.
We propose a novel Knowledge-Aware Multi-Intent Contrastive Learning (KAMCL) model.
It uses relations in the knowledge graph to construct intents, aiming to mine the connections among a user's multiple behaviors from the perspective of intents and thereby achieve more accurate recommendations.
arXiv Detail & Related papers (2024-04-18T08:39:52Z)
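A minimal InfoNCE-style loss of the kind multi-intent contrastive methods such as KAMCL build on: two views of the same user are pulled together while other users are pushed apart. The views here are random stand-ins; in KAMCL the intent view would come from the knowledge graph.

# Minimal InfoNCE-style contrastive loss. Two views of each user (e.g. an
# intent-based and a behavior-based embedding) form the positive pairs on the
# diagonal; all other users in the batch act as negatives. The views below are
# random stand-ins, not KAMCL's actual representations.
import numpy as np

def info_nce(view_a, view_b, temperature=0.2):
    # L2-normalize, then score every user pair across the two views.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature            # (n_users, n_users)
    # Row-wise log-softmax; diagonal entries are the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
intent_view = rng.normal(size=(16, 32))
behavior_view = intent_view + 0.1 * rng.normal(size=(16, 32))  # correlated view
print(f"contrastive loss: {info_nce(intent_view, behavior_view):.4f}")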
- A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation [77.42486522565295]
We propose a novel recommendation approach called LSVCR to jointly conduct personalized video and comment recommendation.
Our approach consists of two key components, namely a sequential recommendation (SR) model and a supplemental large language model (LLM) recommender.
In particular, we achieve a significant overall gain of 4.13% in comment watch time.
arXiv Detail & Related papers (2024-03-20T13:14:29Z)
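An illustrative sketch of the two-component design described for LSVCR, blending candidate scores from the SR model with preference scores from the LLM recommender. The score vectors and the blend weight are made up for illustration; the paper's actual alignment and fusion mechanism is more involved.

# Illustrative only: fuse scores from a sequential recommender and an LLM-based
# recommender by blending their softmax distributions. Both score vectors and
# the blend weight alpha are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_candidates = 10
sr_scores = rng.normal(size=n_candidates)    # from the SR model
llm_scores = rng.normal(size=n_candidates)   # from the LLM recommender

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

alpha = 0.7  # assumed blend weight favoring the SR model
final = alpha * softmax(sr_scores) + (1 - alpha) * softmax(llm_scores)
print("top-3 candidates:", np.argsort(-final)[:3])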
- Multi-Behavior Enhanced Recommendation with Cross-Interaction Collaborative Relation Modeling [42.6279077675585]
This work proposes a Graph Neural Multi-Behavior Enhanced Recommendation (GNMR) framework.
It explicitly models the dependencies between different types of user-item interactions under a graph-based message passing architecture.
Experiments on real-world recommendation datasets show that our GNMR consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-01-07T03:12:37Z)
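A hypothetical sketch of multi-behavior message passing in the spirit of GNMR: propagate each behavior type separately, then mix the per-behavior user representations with a dependency matrix that would be learned in a real model.

# Hypothetical sketch of multi-behavior message passing: per-behavior
# propagation followed by cross-behavior dependency mixing. All parameters are
# random stand-ins for what a real model would learn.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, n_beh = 5, 7, 8, 3

# One user-item interaction matrix per behavior type.
A = rng.integers(0, 2, (n_beh, n_users, n_items)).astype(float)
item_emb = rng.normal(size=(n_items, dim))

# Step 1: per-behavior propagation (row-normalized mean aggregation).
per_behavior = []
for b in range(n_beh):
    deg = np.maximum(A[b].sum(axis=1, keepdims=True), 1.0)
    per_behavior.append((A[b] / deg) @ item_emb)     # (n_users, dim)
per_behavior = np.stack(per_behavior)                # (n_beh, n_users, dim)

# Step 2: cross-behavior dependency modeling -- mix behavior channels with a
# (here random, row-normalized) dependency matrix.
D = rng.random((n_beh, n_beh))
D /= D.sum(axis=1, keepdims=True)
mixed = np.einsum("bc,cud->bud", D, per_behavior)

# Final user embedding: sum over behavior channels.
user_emb = mixed.sum(axis=0)
print(user_emb.shape)  # (5, 8)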
- Knowledge-Enhanced Hierarchical Graph Transformer Network for Multi-Behavior Recommendation [56.12499090935242]
This work proposes a Knowledge-Enhanced Hierarchical Graph Transformer Network (KHGT) to investigate multi-typed interactive patterns between users and items in recommender systems.
KHGT is built upon a graph-structured neural architecture to capture type-specific behavior characteristics.
We show that KHGT consistently outperforms many state-of-the-art recommendation methods across various evaluation settings.
arXiv Detail & Related papers (2021-10-08T09:44:00Z)
- Concept-Aware Denoising Graph Neural Network for Micro-Video Recommendation [30.67251766249372]
We propose a novel concept-aware denoising graph neural network (named CONDE) for micro-video recommendation.
The proposed CONDE achieves significantly better recommendation performance than the existing state-of-the-art solutions.
arXiv Detail & Related papers (2021-09-28T07:02:52Z)
- Hyper Meta-Path Contrastive Learning for Multi-Behavior Recommendation [61.114580368455236]
User purchasing prediction with multi-behavior information remains a challenging problem for current recommendation systems.
We propose the concept of hyper meta-path to construct hyper meta-paths or hyper meta-graphs to explicitly illustrate the dependencies among different behaviors of a user.
Thanks to the recent success of graph contrastive learning, we leverage it to learn embeddings of user behavior patterns adaptively instead of assigning a fixed scheme to understand the dependencies among different behaviors.
arXiv Detail & Related papers (2021-09-07T04:28:09Z)
- ASCNet: Self-supervised Video Representation Learning with Appearance-Speed Consistency [62.38914747727636]
We study self-supervised video representation learning, which is a challenging task due to 1) a lack of labels for explicit supervision and 2) unstructured and noisy visual information.
Existing methods mainly use contrastive loss with video clips as the instances and learn visual representation by discriminating instances from each other.
In this paper, we observe that consistency between positive samples is the key to learning robust video representations.
arXiv Detail & Related papers (2021-06-04T08:44:50Z)
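An illustrative sketch of the consistency idea behind ASCNet: features of the same clip at different playback speeds should agree in appearance, so the positive pair is (clip, speed-changed clip) rather than two arbitrary instances. The feature vectors below are random stand-ins for a video encoder's output.

# Illustrative only: the positive pair is the same clip at two playback speeds,
# whose appearance features should agree; a different clip serves as the
# negative. Features are synthetic stand-ins for an encoder's output.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
feat_1x = rng.normal(size=128)                   # encoder(clip at 1x speed)
feat_2x = feat_1x + 0.05 * rng.normal(size=128)  # encoder(same clip at 2x)
feat_other = rng.normal(size=128)                # encoder(different clip)

# Appearance consistency: the positive pair should score far above a negative.
print(f"positive similarity: {cosine(feat_1x, feat_2x):.3f}")
print(f"negative similarity: {cosine(feat_1x, feat_other):.3f}")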
- CoCon: Cooperative-Contrastive Learning [52.342936645996765]
Self-supervised visual representation learning is key for efficient video analysis.
Recent success in learning image representations suggests contrastive learning is a promising framework to tackle this challenge.
We introduce a cooperative variant of contrastive learning to utilize complementary information across views.
arXiv Detail & Related papers (2021-04-30T05:46:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.