Shorter Is Different: Characterizing the Dynamics of Short-Form Video Platforms
- URL: http://arxiv.org/abs/2410.16058v1
- Date: Mon, 21 Oct 2024 14:37:26 GMT
- Title: Shorter Is Different: Characterizing the Dynamics of Short-Form Video Platforms
- Authors: Zhilong Chen, Peijie Liu, Jinghua Piao, Fengli Xu, Yong Li
- Abstract summary: We conduct a large-scale data-driven analysis of Kuaishou, one of the largest short-form video platforms in China.
Based on 248 million videos uploaded to the platform across all categories, we identify their notable differences from long-form video platforms.
We find that videos are shortened by multiples on Kuaishou, with distinctive categorical distributions over-represented by life-related rather than interest-based videos.
- Score: 10.078299014855622
- Abstract: Emerging short-form video platforms have grown tremendously and have recently become some of the leading social media. Although the expanded popularity of these platforms has attracted increasing research attention, there has been a lack of understanding of whether and how they deviate from traditional long-form video-sharing platforms such as YouTube and Bilibili. To address this, we conduct a large-scale data-driven analysis of Kuaishou, one of the largest short-form video platforms in China. Based on 248 million videos uploaded to the platform across all categories, we identify their notable differences from long-form video platforms through a comparison study with Bilibili, a leading long-form video platform in China. We find that videos are shortened by multiples on Kuaishou, with distinctive categorical distributions over-represented by life-related rather than interest-based videos. Users interact with videos less per view, but top videos can even more effectively acquire users' collective attention. More importantly, ordinary content creators have higher probabilities of producing hit videos. Our results shed light on the uniqueness of short-form video platforms and pave the way for future research and design for better short-form video ecology.
Related papers
- MUFM: A Mamba-Enhanced Feedback Model for Micro Video Popularity Prediction [1.7040391128945196]
We introduce a framework for capturing long-term dependencies in user feedback and dynamic event interactions.
Our experiments on the large-scale open-source multi-modal dataset show that our model significantly outperforms state-of-the-art approaches by 23.2%.
arXiv Detail & Related papers (2024-11-23T05:13:27Z) - TripletViNet: Mitigating Misinformation Video Spread Across Platforms [3.1492627280939547]
There has been rampant propagation of fake news and misinformation videos on many platforms lately.
Recent research has shown the feasibility of identifying video titles from encrypted network traffic within a single platform.
There are no existing methods for cross-platform video recognition.
arXiv Detail & Related papers (2024-07-15T12:03:23Z) - VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling [71.01050359126141]
We propose VidMuse, a framework for generating music aligned with video inputs.
VidMuse produces high-fidelity music that is both acoustically and semantically aligned with the video.
arXiv Detail & Related papers (2024-06-06T17:58:11Z) - MovieLLM: Enhancing Long Video Understanding with AI-Generated Movies [21.489102981760766]
MovieLLM is a novel framework designed to synthesize consistent and high-quality video data for instruction tuning.
Our experiments validate that the data produced by MovieLLM significantly improves the performance of multimodal models in understanding complex video narratives.
arXiv Detail & Related papers (2024-03-03T07:43:39Z) - Beyond the Frame: Single and multiple video summarization method with user-defined length [4.424739166856966]
Video summarization is a difficult but significant task, with substantial potential for further research and development.
In this paper, we combine a variety of NLP techniques (extractive and context-based summarizers) with video processing techniques to convert a long video into a single, relatively short video.
arXiv Detail & Related papers (2023-12-23T04:32:07Z) - ChatVideo: A Tracklet-centric Multimodal and Versatile Video Understanding System [119.51012668709502]
We present our vision for multimodal and versatile video understanding and propose a prototype system, ChatVideo.
Our system is built upon a tracklet-centric paradigm, which treats tracklets as the basic video unit.
All the detected tracklets are stored in a database and interact with the user through a database manager.
arXiv Detail & Related papers (2023-04-27T17:59:58Z) - Video Generation Beyond a Single Clip [76.5306434379088]
Video generation models can only generate video clips that are relatively short compared with the length of real videos.
To generate long videos covering diverse content and multiple events, we propose to use additional guidance to control the video generation process.
The proposed approach is complementary to existing efforts on video generation, which focus on generating realistic video within a fixed time window.
arXiv Detail & Related papers (2023-04-15T06:17:30Z) - Examining the Production of Co-active Channels on YouTube and BitChute [0.0]
This study explores differences in video production across 27 co-active channels on YouTube and BitChute.
We find that the majority of channels use significantly more moral and political words in their video titles on BitChute than in their video titles on YouTube.
In some cases, we find that channels produce videos on different sets of topics across the platforms, often producing content on BitChute that would likely be moderated on YouTube.
arXiv Detail & Related papers (2023-03-14T12:51:46Z) - Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer [66.56167074658697]
We present a method that builds on 3D-VQGAN and transformers to generate videos with thousands of frames.
Our evaluation shows that our model trained on 16-frame video clips can generate diverse, coherent, and high-quality long videos.
We also showcase conditional extensions of our approach for generating meaningful long videos by incorporating temporal information with text and audio.
arXiv Detail & Related papers (2022-04-07T17:59:02Z) - 3MASSIV: Multilingual, Multimodal and Multi-Aspect dataset of Social Media Short Videos [72.69052180249598]
We present 3MASSIV, a multilingual, multimodal, multi-aspect, expertly-annotated dataset of diverse short videos extracted from the short-video social media platform Moj.
3MASSIV comprises 50K short videos (20 seconds average duration) and 100K unlabeled videos in 11 different languages.
We show how the social media content in 3MASSIV is dynamic and temporal in nature, which can be used for semantic understanding tasks and cross-lingual analysis.
arXiv Detail & Related papers (2022-03-28T02:47:01Z) - Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos [98.87558262467257]
This study explores cultural preferences among countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.