LiveForesighter: Generating Future Information for Live-Streaming Recommendations at Kuaishou
- URL: http://arxiv.org/abs/2502.06557v1
- Date: Mon, 10 Feb 2025 15:24:55 GMT
- Title: LiveForesighter: Generating Future Information for Live-Streaming Recommendations at Kuaishou
- Authors: Yucheng Lu, Jiangxia Cao, Xu Kuan, Wei Cheng, Wei Jiang, Jiaming Zhang, Yang Shuang, Liu Zhaojie, Liyin Hong
- Abstract summary: Live-streaming is a new-generation medium that connects users and authors. Live-streaming content changes dynamically over time. How can we discover the live-streams whose content a user is interested in at the current moment?
- Score: 20.689363722025163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Live-streaming, as a new-generation medium connecting users and authors, has attracted a lot of attention and experienced rapid growth in recent years. Compared with content-static short-video recommendation, live-streaming recommendation faces more challenges in giving users a satisfactory experience: (1) live-streaming content changes dynamically over time, and (2) valuable behaviors (e.g., sending digital gifts, buying products) usually require users to watch for a long time (>10 min). Together, these two attributes raise a challenging question for live-streaming recommendation: how do we discover the live-streams whose content a user is interested in at the current moment, and further over a period in the future?
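The abstract frames the core idea (scoring a stream not only by what it shows now but by what it is likely to show next) without specifying a model. Below is a minimal, hypothetical sketch of that idea in PyTorch; the class name, the GRU-plus-linear forecaster, and all dimensions are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class FutureAwareScorer(nn.Module):
    """Hypothetical sketch: score a (user, live-stream) pair using both the
    stream's current content and a forecast of its near-future content."""

    def __init__(self, dim: int = 64):
        super().__init__()
        # A GRU summarizes the stream's recent per-minute content embeddings.
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        # A small head extrapolates the summary into a "future content" vector.
        self.forecaster = nn.Linear(dim, dim)
        self.user_proj = nn.Linear(dim, dim)

    def forward(self, user_emb, content_seq):
        # content_seq: (batch, minutes, dim) recent content features.
        _, h = self.encoder(content_seq)   # h: (1, batch, dim)
        current = h.squeeze(0)             # current-moment content summary
        future = self.forecaster(current)  # forecast of upcoming content
        u = self.user_proj(user_emb)
        # Reward interest now AND over the coming period: average both scores.
        return 0.5 * ((u * current).sum(-1) + (u * future).sum(-1))

scorer = FutureAwareScorer()
score = scorer(torch.randn(2, 64), torch.randn(2, 10, 64))
print(score.shape)  # torch.Size([2])
```

Averaging the current-moment and forecast match scores is one simple way to favor streams a user would still enjoy past the >10-minute horizon the abstract mentions.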
Related papers
- LLM-Alignment Live-Streaming Recommendation [20.817796284487468]
Integrated short-video and live-streaming platforms have gained massive global adoption, offering dynamic content creation and consumption.
The same live-stream can offer vastly different experiences depending on when a user watches.
To optimize recommendations, a RecSys must accurately interpret the real-time semantics of live content and align them with user preferences.
arXiv Detail & Related papers (2025-04-07T16:04:00Z) - FARM: Frequency-Aware Model for Cross-Domain Live-Streaming Recommendation [24.07417561307543]
We propose a Frequency-Aware Model for Cross-Domain Live-Streaming Recommendation, termed FARM.
Our FARM has been deployed in online live-streaming services and currently serves hundreds of millions of users on Kuaishou.
arXiv Detail & Related papers (2025-02-13T14:44:15Z) - TeaserGen: Generating Teasers for Long Documentaries [59.8220642722399]
We present DocumentaryNet, a collection of 1,269 documentaries paired with their teasers.
We propose a new two-stage system for generating teasers from long documentaries.
arXiv Detail & Related papers (2024-10-08T01:00:09Z) - Moment&Cross: Next-Generation Real-Time Cross-Domain CTR Prediction for Live-Streaming Recommendation at Kuaishou [23.590638242542347]
Kuaishou is one of the largest short-video and live-streaming platforms.
Live-streaming recommendation is more complex because: (1) streams are only temporarily alive and available for distribution, (2) users may watch for a long time, so feedback is delayed, and (3) content is unpredictable and changes over time.
arXiv Detail & Related papers (2024-08-11T07:00:27Z) - MMBee: Live Streaming Gift-Sending Recommendations via Multi-Modal Fusion and Behaviour Expansion [18.499672566131355]
Accurately modeling the gifting interaction not only enhances users' experience but also increases streamers' revenue.
Previous studies on live streaming gifting prediction treat this task as a conventional recommendation problem.
We propose MMBee based on real-time Multi-Modal Fusion and Behaviour Expansion to address these issues.
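MMBee's components are only named here, so the following is a deliberately simplified, hypothetical illustration of multi-modal fusion for gift prediction (per-modality projections concatenated into a binary gifting head), not the paper's actual model or its behavior-expansion mechanism.

```python
import torch
import torch.nn as nn

class GiftFusionModel(nn.Module):
    """Illustrative sketch (not MMBee itself): fuse per-modality stream
    embeddings with a user-behavior embedding to predict gift-sending."""

    MODALITIES = ("visual", "audio", "text", "behavior")

    def __init__(self, dim: int = 32):
        super().__init__()
        # One projection per modality; a real system would use pretrained encoders.
        self.proj = nn.ModuleDict({m: nn.Linear(dim, dim) for m in self.MODALITIES})
        self.head = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, feats: dict) -> torch.Tensor:
        fused = torch.cat([self.proj[m](feats[m]) for m in self.MODALITIES], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)  # P(user sends a gift)

model = GiftFusionModel()
batch = {m: torch.randn(4, 32) for m in GiftFusionModel.MODALITIES}
print(model(batch).shape)  # torch.Size([4])
```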
arXiv Detail & Related papers (2024-06-15T04:59:00Z) - AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results [140.47245070508353]
This paper reviews the AIS 2024 Video Quality Assessment (VQA) Challenge, focused on User-Generated Content (UGC).
The aim of this challenge is to gather deep learning-based methods capable of estimating perceptual quality of videos.
The user-generated videos from the YouTube dataset cover diverse content (sports, games, lyrics, anime, etc.), qualities, and resolutions.
arXiv Detail & Related papers (2024-04-24T21:02:14Z) - LiveChat: Video Comment Generation from Audio-Visual Multimodal Contexts [8.070778830276275]
We create a large-scale audio-visual multimodal dialogue dataset to facilitate the development of live commenting technologies.
The data is collected from Twitch, with 11 different categories and 575 streamers for a total of 438 hours of video and 3.2 million comments.
We propose a novel multimodal generation model capable of generating live comments that align with the temporal and spatial events within the video.
arXiv Detail & Related papers (2023-10-01T02:35:58Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
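As one way to picture "appropriate contextual information", here is a hedged sketch of a context-conditioned moderation classifier built with Hugging Face transformers: preceding chat lines go in as the first text segment and the moderated comment as the second. The checkpoint, label set, and pairing scheme are illustrative assumptions, and the classification head is untrained here (it would need fine-tuning on annotated comments).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint and labels, not the paper's; for illustration only.
MODEL = "distilbert-base-uncased"
LABELS = ["no_violation", "harassment", "spam", "off_topic"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=len(LABELS))

def classify(context_msgs: list[str], comment: str) -> str:
    # Feed preceding chat lines as the first segment and the moderated
    # comment as the second, so the model can condition on context.
    inputs = tokenizer(" ".join(context_msgs), comment,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Untrained head, so the output is arbitrary until fine-tuned.
print(classify(["anyone else lagging?", "streamer muted again"], "you are all idiots"))
```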
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - LiveSeg: Unsupervised Multimodal Temporal Segmentation of Long Livestream Videos [82.48910259277984]
Livestream tutorial videos are usually hours long, recorded, and uploaded to the Internet directly after the live sessions, making it hard for other people to catch up quickly.
An outline will be a beneficial solution, which requires the video to be temporally segmented according to topics.
We propose LiveSeg, an unsupervised Livestream video temporal segmentation solution, which takes advantage of multimodal features from different domains.
arXiv Detail & Related papers (2022-10-12T00:08:17Z) - Tutorial Recommendation for Livestream Videos using Discourse-Level Consistency and Ontology-Based Filtering [75.78484403289228]
We present a novel dataset and model for the task of tutorial recommendation for live-streamed videos.
A system can analyze the content of the live streaming video and recommend the most relevant tutorials.
arXiv Detail & Related papers (2022-09-11T22:45:57Z) - Generating Long Videos of Dynamic Scenes [66.56925105992472]
We present a video generation model that reproduces object motion, changes in camera viewpoint, and new content that arises over time.
A common failure case is for content to never change due to over-reliance on inductive biases to provide temporal consistency.
arXiv Detail & Related papers (2022-06-07T16:29:51Z) - The Role of "Live" in Livestreaming Markets: Evidence Using Orthogonal Random Forest [5.993591729907003]
We estimate how demand responds to price before, on the day of, and after the livestream.
We find significant dynamics in the price elasticity of demand over the temporal distance to the scheduled livestreaming day and afterward.
We provide suggestive evidence for the likely mechanisms driving our results.
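The paper uses an orthogonal random forest; as a rough sketch of the orthogonalization idea behind such estimators, the snippet below runs a cross-fitted double-ML residualization on synthetic data with scikit-learn forests. The data-generating process and covariates are invented for illustration and are not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# Synthetic data: price depends on covariates, demand has a heterogeneous
# price elasticity averaging -1.5. All of this is made up for the sketch.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                     # e.g., days to the livestream
log_price = 0.5 * X[:, 0] + rng.normal(size=n)  # price depends on covariates
elasticity = -1.5 + 0.4 * X[:, 0]               # heterogeneous true effect
log_demand = elasticity * log_price + X[:, 1] + rng.normal(size=n)

# Step 1: partial out X from treatment and outcome with cross-fitted forests.
rf = lambda: RandomForestRegressor(n_estimators=200, random_state=0)
price_resid = log_price - cross_val_predict(rf(), X, log_price, cv=5)
demand_resid = log_demand - cross_val_predict(rf(), X, log_demand, cv=5)

# Step 2: residual-on-residual regression gives an average price elasticity.
avg_elasticity = (price_resid @ demand_resid) / (price_resid @ price_resid)
print(f"estimated average elasticity: {avg_elasticity:.2f}")  # close to -1.5
```

Cross-fitting (via cross_val_predict) keeps the nuisance predictions out-of-sample, which is what makes the final residual regression orthogonal to errors in the forests.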
arXiv Detail & Related papers (2021-07-04T13:50:54Z)