Network-Based Video Recommendation Using Viewing Patterns and Modularity Analysis: An Integrated Framework
- URL: http://arxiv.org/abs/2308.12743v3
- Date: Wed, 08 Jan 2025 07:00:36 GMT
- Title: Network-Based Video Recommendation Using Viewing Patterns and Modularity Analysis: An Integrated Framework
- Authors: Mehrdad Maghsoudi, Mohammad Hossein Valikhani, Mohammad Hossein Zohdi
- Abstract summary: This research introduces a novel approach by combining implicit user data, such as viewing percentages, with social network analysis to enhance personalization in VOD platforms.
The system was evaluated on a documentary-focused VOD platform with 328 users over four months.
Results showed significant improvements: a 63% increase in click-through rate (CTR), a 24% increase in view completion rate, and a 17% improvement in user satisfaction.
- Score: 1.2289361708127877
- Abstract: The proliferation of video-on-demand (VOD) services has led to a paradox of choice, overwhelming users with vast content libraries and revealing limitations in current recommender systems. This research introduces a novel approach by combining implicit user data, such as viewing percentages, with social network analysis to enhance personalization in VOD platforms. The methodology constructs user-item interaction graphs based on viewing patterns and applies centrality measures (degree, closeness, and betweenness) to identify important videos. Modularity-based clustering groups related content, enabling personalized recommendations. The system was evaluated on a documentary-focused VOD platform with 328 users over four months. Results showed significant improvements: a 63% increase in click-through rate (CTR), a 24% increase in view completion rate, and a 17% improvement in user satisfaction. The approach outperformed traditional methods like Naive Bayes and SVM. Future research should explore advanced techniques, such as matrix factorization models, graph neural networks, and hybrid approaches combining content-based and collaborative filtering. Additionally, incorporating temporal models and addressing scalability challenges for large-scale platforms are essential next steps. This study contributes to the state of the art by introducing modularity-based clustering and ego-centric ranking methods to enhance personalization in video recommendations. The findings suggest that integrating network-based features and implicit feedback can significantly improve user engagement, offering a cost-effective solution for VOD platforms to enhance recommendation quality.
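The methodology described above maps naturally onto standard graph tooling. Below is a minimal, illustrative sketch of that pipeline using NetworkX with made-up viewing data; the 50% viewing cutoff, the equal averaging of the three centralities, and the `recommend` helper are assumptions made for exposition, not the authors' implementation (which also includes ego-centric ranking).
```python
# Minimal sketch of the described pipeline (NetworkX); data and thresholds are illustrative.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Implicit feedback: (user, video, fraction watched) -- hypothetical sample data.
views = [
    ("u1", "v1", 0.92), ("u1", "v2", 0.40), ("u2", "v1", 0.85),
    ("u2", "v3", 0.77), ("u3", "v2", 0.95), ("u3", "v3", 0.60),
]

# 1) User-item interaction graph weighted by viewing percentage.
G = nx.Graph()
for user, video, pct in views:
    if pct >= 0.5:  # assumed cutoff for a "meaningful" view
        G.add_edge(user, video, weight=pct)

# 2) Centrality measures (degree, closeness, betweenness) to score video importance.
videos = [n for n in G if n.startswith("v")]
deg = nx.degree_centrality(G)
clo = nx.closeness_centrality(G)
bet = nx.betweenness_centrality(G)
importance = {v: (deg[v] + clo[v] + bet[v]) / 3 for v in videos}  # assumed equal weighting

# 3) Modularity-based clustering to group related users and videos.
communities = greedy_modularity_communities(G, weight="weight")

# 4) Recommend unseen videos from the user's own community, ranked by importance.
def recommend(user, k=3):
    seen = set(G.neighbors(user)) if user in G else set()
    community = next((c for c in communities if user in c), frozenset())
    candidates = [v for v in community if v in importance and v not in seen]
    return sorted(candidates, key=importance.get, reverse=True)[:k]

print(recommend("u1"))
```
A bipartite projection onto the video side before computing centralities would be a natural variant; the sketch keeps the raw user-video graph for brevity.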
Related papers
- Interactive Visualization Recommendation with Hier-SUCB [52.11209329270573]
We propose an interactive personalized visualization recommendation (PVisRec) system that learns on user feedback from previous interactions.
For more interactive and accurate recommendations, we propose Hier-SUCB, a contextual semi-bandit in the PVisRec setting.
arXiv Detail & Related papers (2025-02-05T17:14:45Z) - A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation [77.42486522565295]
We propose a novel recommendation approach called LSVCR to jointly conduct personalized video and comment recommendation.
Our approach consists of two key components, namely sequential recommendation (SR) model and supplemental large language model (LLM) recommender.
In particular, we achieve a significant overall gain of 4.13% in comment watch time.
arXiv Detail & Related papers (2024-03-20T13:14:29Z) - Neural Graph Collaborative Filtering Using Variational Inference [19.80976833118502]
We introduce graph variational embedding collaborative filtering (GVECF) as a novel framework to incorporate representations learned through a variational graph auto-encoder.
Our proposed method achieves up to a 13.78% improvement in recall on the test data.
arXiv Detail & Related papers (2023-11-20T15:01:33Z) - EvalCrafter: Benchmarking and Evaluating Large Video Generation Models [70.19437817951673]
We argue that it is hard to judge large conditional generative models using simple metrics, since these models are often trained on very large datasets and have multi-aspect abilities.
Our approach involves generating a diverse and comprehensive list of 700 prompts for text-to-video generation.
Then, we evaluate the state-of-the-art video generative models on our carefully designed benchmark, in terms of visual qualities, content qualities, motion qualities, and text-video alignment with 17 well-selected objective metrics.
arXiv Detail & Related papers (2023-10-17T17:50:46Z) - ClusterSeq: Enhancing Sequential Recommender Systems with Clustering based Meta-Learning [3.168790535780547]
ClusterSeq is a Meta-Learning Clustering-Based Sequential Recommender System.
It exploits dynamic information in the user sequence to enhance item prediction accuracy, even in the absence of side information.
Our proposed approach achieves a substantial improvement of 16-39% in Mean Reciprocal Rank (MRR).
arXiv Detail & Related papers (2023-07-25T18:53:24Z) - Ordinal Graph Gamma Belief Network for Social Recommender Systems [54.9487910312535]
We develop a hierarchical Bayesian model termed ordinal graph factor analysis (OGFA), which jointly models user-item and user-user interactions.
OGFA not only achieves good recommendation performance, but also extracts interpretable latent factors corresponding to representative user preferences.
We extend OGFA to ordinal graph gamma belief network, which is a multi-stochastic-layer deep probabilistic model.
arXiv Detail & Related papers (2022-09-12T09:19:22Z) - Hypergraph Contrastive Collaborative Filtering [44.8586906335262]
We propose a new self-supervised recommendation framework, Hypergraph Contrastive Collaborative Filtering (HCCF).
HCCF captures local and global collaborative relations with a hypergraph-enhanced cross-view contrastive learning architecture.
Our model effectively integrates the hypergraph structure encoding with self-supervised learning to reinforce the representation quality of recommender systems.
arXiv Detail & Related papers (2022-04-26T10:06:04Z) - Modeling High-order Interactions across Multi-interests for Micro-video Recommendation [65.16624625748068]
We propose a Self-over-Co Attention module to enhance user's interest representation.
In particular, we first use co-attention to model correlation patterns across different levels and then use self-attention to model correlation patterns within a specific level.
arXiv Detail & Related papers (2021-04-01T07:20:15Z)
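For the micro-video entry above, the "co-attention across levels, then self-attention within a level" pattern can be illustrated with a generic attention block. This is a rough sketch under assumed shapes, module names, and residual fusion, not the cited paper's code.
```python
# Illustrative sketch of a "co-attention then self-attention" block; all names,
# dimensions, and the residual fusion are assumptions, not the paper's code.
import torch
import torch.nn as nn

class SelfOverCoAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.co_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, level_a, level_b):
        # Co-attention: level-A interest vectors attend to level-B interests,
        # capturing correlation patterns across the two levels.
        cross, _ = self.co_attn(query=level_a, key=level_b, value=level_b)
        # Self-attention: model correlations within the (cross-enriched) level.
        fused = level_a + cross
        out, _ = self.self_attn(fused, fused, fused)
        return out

# Example: batch of 8 users, 5 interest vectors per level, 64-dim embeddings.
a, b = torch.randn(8, 5, 64), torch.randn(8, 5, 64)
print(SelfOverCoAttention()(a, b).shape)  # torch.Size([8, 5, 64])
```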
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - SalSum: Saliency-based Video Summarization using Generative Adversarial Networks [6.45481313278967]
We propose a novel VS approach based on a Generative Adversarial Network (GAN) model trained with human eye fixations.
The proposed method is evaluated against state-of-the-art VS approaches on the VSUMM benchmark dataset.
arXiv Detail & Related papers (2020-11-20T14:53:08Z) - Learning User Representations with Hypercuboids for Recommender Systems [26.80987554753327]
Our model explicitly models user interests as a hypercuboid instead of a point in the space.
We present two variants of hypercuboids to enhance the capability in capturing the diversities of user interests.
A neural architecture is also proposed to facilitate user hypercuboid learning by capturing the activity sequences (e.g., buy and rate) of users.
arXiv Detail & Related papers (2020-11-11T12:50:00Z)
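For the hypercuboid entry above, the core representational idea, a user's interests as an axis-aligned box with a center and per-dimension offsets rather than a single point, can be sketched as follows; the scoring rule, names, and shapes are assumptions, not the cited paper's model.
```python
# Illustrative sketch of scoring items against a user "hypercuboid"
# (center + per-dimension half-widths); names and shapes are assumptions.
import numpy as np

def hypercuboid_score(item_emb, box_center, box_offset):
    """Higher score = item lies closer to (or inside) the user's box."""
    # Per-dimension distance from the item point to the box:
    # zero inside the box, linear outside of it.
    outside = np.maximum(np.abs(item_emb - box_center) - box_offset, 0.0)
    return -np.linalg.norm(outside, axis=-1)

rng = np.random.default_rng(0)
items = rng.normal(size=(100, 32))           # 100 candidate items, 32-dim embeddings
center = rng.normal(size=32)                 # user box center
offset = np.abs(rng.normal(size=32)) * 0.5   # user box half-widths
top10 = np.argsort(-hypercuboid_score(items, center, offset))[:10]
print(top10)                                 # indices of the 10 best-fitting items
```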