Will You Dance To The Challenge? Predicting User Participation of TikTok
Challenges
- URL: http://arxiv.org/abs/2112.13384v1
- Date: Sun, 26 Dec 2021 14:24:28 GMT
- Title: Will You Dance To The Challenge? Predicting User Participation of TikTok
Challenges
- Authors: Lynnette Hui Xian Ng, John Yeh Han Tan, Darryl Jing Heng Tan, Roy
Ka-Wei Lee
- Abstract summary: This paper investigates social contagion of TikTok challenges through predicting a user's participation.
We propose a novel deep learning model, deepChallenger, to learn and combine latent user and challenge representations from past videos.
- Score: 2.2850019869312432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: TikTok is a popular new social media platform where users express themselves through
short video clips. A common form of interaction on the platform is
participating in "challenges", which are songs and dances for users to iterate
upon. Challenge contagion can be measured through replication reach, i.e.,
users uploading videos of their participation in the challenges. The uniqueness
of the TikTok platform where both challenge content and user preferences are
evolving requires the combination of challenge and user representation. This
paper investigates social contagion of TikTok challenges through predicting a
user's participation. We propose a novel deep learning model, deepChallenger,
to learn and combine latent user and challenge representations from past videos
to perform this user-challenge prediction task. We collect a dataset of over
7,000 videos from 12 trending challenges on the ForYouPage, the app's landing
page, and over 10,000 videos from 1,303 users. Extensive experiments are
conducted and the results show that our proposed deepChallenger (F1=0.494)
outperforms baselines (F1=0.188) in the prediction task.
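The core idea the abstract describes, combining a latent user representation with a latent challenge representation to score the likelihood of participation, can be sketched as follows. This is a hypothetical illustration, not the paper's deepChallenger architecture: the function names, dimensions, and untrained linear scoring layer are all assumptions for clarity.

```python
import numpy as np

# Hypothetical sketch: combine a latent user vector and a latent challenge
# vector, then score the pair with a logistic output. All names and
# dimensions are illustrative assumptions, not the paper's model.

def sigmoid(x: float) -> float:
    # Standard logistic function, maps any real score into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def participation_score(user_vec: np.ndarray,
                        challenge_vec: np.ndarray,
                        w: np.ndarray,
                        b: float) -> float:
    """Concatenate the two latent representations and apply a linear
    layer followed by a sigmoid, yielding P(user joins challenge)."""
    pair = np.concatenate([user_vec, challenge_vec])
    return sigmoid(float(w @ pair) + b)

rng = np.random.default_rng(0)
user = rng.normal(size=8)       # latent user representation (e.g. learned from past videos)
challenge = rng.normal(size=8)  # latent challenge representation
w = rng.normal(size=16)         # weights of an (untrained) scoring layer
p = participation_score(user, challenge, w, 0.0)
print(p)                        # a probability in (0, 1)
```

In the paper's actual setup, the user and challenge representations are learned from past videos and combined by the deepChallenger network; the concatenate-and-score step here only stands in for that combination to make the prediction task concrete.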
Related papers
- Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks [62.443665295250035]
We present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affect Computing (CCAC 2023).
In total, 32 competing teams registered for the challenge, from which we received 11 successful submissions.
arXiv Detail & Related papers (2024-07-20T10:13:54Z)
- The SkatingVerse Workshop & Challenge: Methods and Results [137.81522563074287]
The SkatingVerse Workshop & Challenge aims to encourage research in developing novel and accurate methods for human action understanding.
The dataset used for the SkatingVerse Challenge has been publicly released.
Around 10 participating teams from around the globe competed in the SkatingVerse Challenge.
arXiv Detail & Related papers (2024-05-27T14:12:07Z)
- AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results [140.47245070508353]
This paper reviews the AIS 2024 Video Quality Assessment (VQA) Challenge, focused on User-Generated Content (UGC).
The aim of this challenge is to gather deep learning-based methods capable of estimating perceptual quality of videos.
The user-generated videos from the YouTube dataset include diverse content (sports, games, lyrics, anime, etc.), qualities, and resolutions.
arXiv Detail & Related papers (2024-04-24T21:02:14Z)
- Are you Struggling? Dataset and Baselines for Struggle Determination in Assembly Videos [4.631245639292796]
We present a new dataset with three assembly activities and corresponding performance baselines for the determination of struggle from video.
Video segments were scored with respect to the level of struggle perceived by annotators, using a forced-choice 4-point scale.
The dataset is the first struggle annotation dataset and contains 5.1 hours of video and 725,100 frames from 73 participants in total.
arXiv Detail & Related papers (2024-02-16T20:12:33Z)
- Lightweight Boosting Models for User Response Prediction Using Adversarial Validation [2.4040470282119983]
The ACM RecSys Challenge 2023, organized by ShareChat, aims to predict the probability of an app being installed.
This paper describes the lightweight solution to this challenge.
arXiv Detail & Related papers (2023-10-05T13:57:05Z)
- NTIRE 2023 Quality Assessment of Video Enhancement Challenge [97.809937484099]
This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement Challenge.
The challenge addresses a major problem in the field of video processing, namely video quality assessment (VQA) for enhanced videos.
The challenge has a total of 167 registered participants.
arXiv Detail & Related papers (2023-07-19T02:33:42Z)
- Analyzing User Engagement with TikTok's Short Format Video Recommendations using Data Donations [31.764672446151412]
We analyze user engagement on TikTok using data we collect via a data donation system.
We find that the average daily usage time increases over the users' lifetime while the user attention remains stable at around 45%.
We also find that users like videos uploaded by people they follow more often than videos recommended by people they do not follow.
arXiv Detail & Related papers (2023-01-12T11:34:45Z)
- The Runner-up Solution for YouTube-VIS Long Video Challenge 2022 [72.13080661144761]
We adopt the previously proposed online video instance segmentation method IDOL for this challenge.
We use pseudo labels to further aid contrastive learning, obtaining more temporally consistent instance embeddings.
The proposed method obtains 40.2 AP on the YouTube-VIS 2022 long video dataset and was ranked second in this challenge.
arXiv Detail & Related papers (2022-11-18T01:40:59Z)
- ReLER@ZJU-Alibaba Submission to the Ego4D Natural Language Queries Challenge 2022 [61.81899056005645]
Given a video clip and a text query, the goal of this challenge is to locate a temporal moment of the video clip where the answer to the query can be obtained.
We propose a multi-scale cross-modal transformer and a video frame-level contrastive loss to fully uncover the correlation between language queries and video clips.
The experimental results demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2022-07-01T12:48:35Z)
- Will You Ever Become Popular? Learning to Predict Virality of Dance Clips [41.2877440857042]
We propose a novel multi-modal framework which integrates skeletal, holistic appearance, facial and scenic cues.
To model body movements, we propose a pyramidal skeleton graph convolutional network (PSGCN) which hierarchically refines temporal skeleton graphs.
To validate our method, we introduce a large-scale viral dance video (VDV) dataset, which contains over 4,000 dance clips of eight viral dance challenges.
arXiv Detail & Related papers (2021-11-06T07:26:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.