Analyzing User Engagement with TikTok's Short Format Video Recommendations using Data Donations
- URL: http://arxiv.org/abs/2301.04945v2
- Date: Wed, 20 Mar 2024 09:22:44 GMT
- Title: Analyzing User Engagement with TikTok's Short Format Video Recommendations using Data Donations
- Authors: Savvas Zannettou, Olivia Nemes-Nemeth, Oshrat Ayalon, Angelica Goetzen, Krishna P. Gummadi, Elissa M. Redmiles, Franziska Roesner
- Abstract summary: We analyze user engagement on TikTok using data we collect via a data donation system.
We find that the average daily usage time increases over the users' lifetime while the user attention remains stable at around 45%.
We also find that users are more likely to like videos uploaded by people they follow than recommended videos from people they do not follow.
- Score: 31.764672446151412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Short-format videos have exploded on platforms like TikTok, Instagram, and YouTube. Despite this, the research community lacks large-scale empirical studies into how people engage with short-format videos and the role of recommendation systems that offer endless streams of such content. In this work, we analyze user engagement on TikTok using data we collect via a data donation system that allows TikTok users to donate their data. We recruited 347 TikTok users and collected 9.2M TikTok video recommendations they received. By analyzing user engagement, we find that the average daily usage time increases over the users' lifetime while the user attention remains stable at around 45%. We also find that users like more videos uploaded by people they follow than those recommended by people they do not follow. Our study offers valuable insights into how users engage with short-format videos on TikTok and lessons learned from designing a data donation system.
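The ~45% "user attention" figure suggests a per-video watch-ratio metric averaged over a user's donated history. A minimal sketch of such a metric follows; the record layout, the `attention` function name, and the cap-at-1.0 rule for re-watches are illustrative assumptions, not the paper's actual definition.

```python
def attention(records):
    """Mean per-video attention for one user's donated watch history.

    Each record is a (watch_time_seconds, video_duration_seconds) pair.
    Attention for a single video is the fraction of its duration watched,
    capped at 1.0 so that re-watches count as a full view.
    """
    ratios = [min(watch / duration, 1.0)
              for watch, duration in records
              if duration > 0]  # skip malformed zero-length entries
    if not ratios:
        return 0.0
    return sum(ratios) / len(ratios)

# Example: three recommended videos with mixed engagement.
print(attention([(10, 40), (30, 30), (5, 20)]))  # → 0.5
```

Aggregating this per-user value daily would let one check whether attention stays flat (as the paper reports) even as total daily usage time grows.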
Related papers
- Viblio: Introducing Credibility Signals and Citations to Video-Sharing Platforms [8.832571289776256]
Viblio is a prototype system that enables YouTube users to view and add citations while watching a video based on participants' needs.
From an evaluation with 12 people, all participants found Viblio to be intuitive and useful in the process of evaluating a video's credibility.
arXiv Detail & Related papers (2024-02-27T05:21:39Z) - User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z) - PIE: Personalized Interest Exploration for Large-Scale Recommender Systems [0.0]
We present a framework for exploration in large-scale recommender systems to address these challenges.
Our methodology can be easily integrated into an existing large-scale recommender system with minimal modifications.
Our work has been deployed in production on Facebook Watch, a popular video discovery and sharing platform serving billions of users.
arXiv Detail & Related papers (2023-04-13T22:25:09Z) - Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users suffer from significantly lower-quality recommendation than the head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z) - An Empirical Investigation of Personalization Factors on TikTok [77.34726150561087]
Despite the importance of TikTok's algorithm to the platform's success and content distribution, little work has been done on the empirical analysis of the algorithm.
Using a sock-puppet audit methodology with a custom algorithm we developed, we tested and analysed the effect of the language and location used to access TikTok.
We identify that the follow-feature has the strongest influence, followed by the like-feature and video view rate.
arXiv Detail & Related papers (2022-01-28T17:40:00Z) - Slapping Cats, Bopping Heads, and Oreo Shakes: Understanding Indicators of Virality in TikTok Short Videos [11.089339341624996]
We study what elements of short videos posted on TikTok contribute to their virality.
Our research highlights the characteristics that distinguish viral from non-viral TikTok videos.
arXiv Detail & Related papers (2021-11-03T18:17:16Z) - QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries [89.24431389933703]
We present the Query-based Video Highlights (QVHighlights) dataset.
It consists of over 10,000 YouTube videos, covering a wide range of topics.
Each video in the dataset is annotated with: (1) a human-written free-form NL query, (2) relevant moments in the video w.r.t. the query, and (3) five-point scale saliency scores for all query-relevant clips.
arXiv Detail & Related papers (2021-07-20T16:42:58Z) - Short Video-based Advertisements Evaluation System: Self-Organizing Learning Approach [22.2568038582329]
We propose a novel end-to-end self-organizing framework for user behavior prediction.
Our model is able to learn the optimal topology of neural network architecture, as well as optimal weights, through training data.
arXiv Detail & Related papers (2020-10-23T20:52:24Z) - Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye-gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that fusing information from visual stimuli and eye images can achieve performance comparable to figures reported in the literature.
arXiv Detail & Related papers (2020-07-26T12:39:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.