Moment&Cross: Next-Generation Real-Time Cross-Domain CTR Prediction for Live-Streaming Recommendation at Kuaishou
- URL: http://arxiv.org/abs/2408.05709v1
- Date: Sun, 11 Aug 2024 07:00:27 GMT
- Title: Moment&Cross: Next-Generation Real-Time Cross-Domain CTR Prediction for Live-Streaming Recommendation at Kuaishou
- Authors: Jiangxia Cao, Shen Wang, Yue Li, Shenghui Wang, Jian Tang, Shiyao Wang, Shuang Yang, Zhaojie Liu, Guorui Zhou,
- Abstract summary: Kuaishou is one of the largest short-video and live-streaming platforms.
Live-streaming recommendation is more complex because: (1) a live-stream is only temporarily alive for distribution, (2) users may watch for a long time, so feedback is delayed, and (3) content is unpredictable and changes over time.
- Score: 23.590638242542347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kuaishou is one of the largest short-video and live-streaming platforms. Compared with short-video recommendation, live-streaming recommendation is more complex because: (1) a live-stream is only temporarily alive for distribution, (2) users may watch for a long time, so feedback is delayed, and (3) content is unpredictable and changes over time. In fact, even if a user is interested in the live-streaming author, the watch may still be negative (e.g., a short view of < 3s) because the real-time content is not attractive enough. Therefore, live-streaming recommendation faces a challenging task: how do we recommend a live-stream to users at the right moment? Additionally, our platform's major exposure content is short-video, and the amount of exposed short-video is 9x that of exposed live-streaming. Users therefore leave far more behaviors on short-videos, which causes a serious data-imbalance problem: the live-streaming data alone cannot fully reflect user interests. This raises another challenging task: how do we utilize users' short-video behaviors to make live-streaming recommendation better?
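The cross-domain idea the abstract motivates can be sketched minimally: one shared user representation, learned mostly from the data-rich short-video domain, is reused to score items in the data-poor live-streaming domain. This is an illustrative sketch only, not the paper's Moment&Cross architecture; all names, sizes, and the dot-product scorer are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only (not from the paper).
n_users, dim = 1000, 16

# One shared user embedding table. In a cross-domain setup it would be
# trained mainly on abundant short-video feedback, then reused by the
# sparser live-streaming domain.
user_emb = rng.normal(scale=0.1, size=(n_users, dim))
short_video_item = rng.normal(scale=0.1, size=dim)
live_stream_item = rng.normal(scale=0.1, size=dim)

def ctr_score(user_id: int, item_vec: np.ndarray) -> float:
    """Sigmoid of a dot product: a minimal CTR-style score."""
    logit = float(user_emb[user_id] @ item_vec)
    return 1.0 / (1.0 + np.exp(-logit))

# The same user representation scores items from both domains, so
# interests learned on short-videos can transfer to live-streams.
p_short = ctr_score(42, short_video_item)
p_live = ctr_score(42, live_stream_item)
```

The design choice illustrated here (a shared user tower feeding two domain-specific scorers) is one common way to transfer behaviors across a 9x exposure imbalance; the paper's actual mechanism may differ.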
Related papers
- LLM-Alignment Live-Streaming Recommendation [20.817796284487468]
Integrated short-video and live-streaming platforms have gained massive global adoption, offering dynamic content creation and consumption.
The same live-stream can offer vastly different experiences depending on when a user watches.
To optimize recommendations, a RecSys must accurately interpret the real-time semantics of live content and align them with user preferences.
arXiv Detail & Related papers (2025-04-07T16:04:00Z) - FARM: Frequency-Aware Model for Cross-Domain Live-Streaming Recommendation [24.07417561307543]
We propose a Frequency-Aware Model for Cross-Domain Live-Streaming Recommendation, termed FARM.
Our FARM has been deployed in online live-streaming services and currently serves hundreds of millions of users on Kuaishou.
arXiv Detail & Related papers (2025-02-13T14:44:15Z) - LiveForesighter: Generating Future Information for Live-Streaming Recommendations at Kuaishou [20.689363722025163]
Live-streaming is a new-generation medium connecting users and authors.
Live-streaming content changes dynamically over time.
How can we discover the live-streams whose content a user is interested in at the current moment?
arXiv Detail & Related papers (2025-02-10T15:24:55Z) - V^3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians [53.614560799043545]
V3 (Viewing Volumetric Videos) is a novel approach that enables high-quality mobile rendering through the streaming of dynamic Gaussians.
Our key innovation is to view dynamic 3DGS as 2D videos, facilitating the use of hardware video codecs.
As the first to stream dynamic Gaussians on mobile devices, our companion player offers users an unprecedented volumetric video experience.
arXiv Detail & Related papers (2024-09-20T16:54:27Z) - Counteracting Duration Bias in Video Recommendation via Counterfactual Watch Time [63.844468159126826]
Watch time prediction suffers from duration bias, hindering its ability to reflect users' interests accurately.
The Counterfactual Watch Model (CWM) is proposed, showing that counterfactual watch time (CWT) equals the time at which users gain the maximum benefit from video recommender systems.
arXiv Detail & Related papers (2024-06-12T06:55:35Z) - A Vlogger-augmented Graph Neural Network Model for Micro-video Recommendation [7.54949302096348]
We propose a vlogger-augmented graph neural network model VA-GNN, which takes the effect of vloggers into consideration.
Specifically, we construct a tripartite graph with users, micro-videos, and vloggers as nodes, capturing user preferences from different views.
When predicting the next user-video interaction, we adaptively combine the user preferences for a video itself and its vlogger.
arXiv Detail & Related papers (2024-05-28T15:13:29Z) - Temporal Sentence Grounding in Streaming Videos [60.67022943824329]
This paper aims to tackle a novel task - Temporal Sentence Grounding in Streaming Videos (TSGSV)
The goal of TSGSV is to evaluate the relevance between a video stream and a given sentence query.
We propose two novel methods: (1) a TwinNet structure that enables the model to learn about upcoming events; and (2) a language-guided feature compressor that eliminates redundant visual frames.
arXiv Detail & Related papers (2023-08-14T12:30:58Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - LiveSeg: Unsupervised Multimodal Temporal Segmentation of Long Livestream Videos [82.48910259277984]
Livestream tutorial videos are usually hours long, recorded, and uploaded to the Internet directly after the live sessions, making it hard for other people to catch up quickly.
An outline will be a beneficial solution, which requires the video to be temporally segmented according to topics.
We propose LiveSeg, an unsupervised Livestream video temporal segmentation solution that takes advantage of multimodal features from different domains.
arXiv Detail & Related papers (2022-10-12T00:08:17Z) - Modeling Live Video Streaming: Real-Time Classification, QoE Inference, and Field Evaluation [1.4353812560047186]
ReCLive is a machine learning method for live video detection and QoE measurement based on network-level behavioral characteristics.
We analyze about 23,000 video streams from Twitch and YouTube, and identify key features in their traffic profile that differentiate live and on-demand streaming.
Our solution provides ISPs with fine-grained visibility into live video streams, enabling them to measure and improve user experience.
arXiv Detail & Related papers (2021-12-05T17:53:06Z) - The Role of "Live" in Livestreaming Markets: Evidence Using Orthogonal Random Forest [5.993591729907003]
We estimate how demand responds to price before, on the day of, and after the livestream.
We find significant dynamics in the price elasticity of demand over the temporal distance to the scheduled livestream day, as well as after it.
We provide suggestive evidence for the likely mechanisms driving our results.
arXiv Detail & Related papers (2021-07-04T13:50:54Z) - Long Short-Term Relation Networks for Video Action Detection [155.13392337831166]
Long Short-Term Relation Networks (LSTR) are presented in this paper.
LSTR aggregates and propagates relations to augment features for video action detection.
Extensive experiments are conducted on four benchmark datasets.
arXiv Detail & Related papers (2020-03-31T10:02:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.