Generalized Delayed Feedback Model with Post-Click Information in
Recommender Systems
- URL: http://arxiv.org/abs/2206.00407v1
- Date: Wed, 1 Jun 2022 11:17:01 GMT
- Title: Generalized Delayed Feedback Model with Post-Click Information in
Recommender Systems
- Authors: Jia-Qi Yang, De-Chuan Zhan
- Abstract summary: We show that post-click user behaviors are also informative to conversion rate prediction and can be used to improve timeliness.
We propose a generalized delayed feedback model (GDFM) that unifies both post-click behaviors and early conversions as post-click information.
- Score: 37.72697954740977
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Predicting conversion rate (e.g., the probability that a user will purchase
an item) is a fundamental problem in machine learning based recommender
systems. However, accurate conversion labels are revealed after a long delay,
which harms the timeliness of recommender systems. Previous literature
concentrates on utilizing early conversions to mitigate such a delayed feedback
problem. In this paper, we show that post-click user behaviors are also
informative to conversion rate prediction and can be used to improve
timeliness. We propose a generalized delayed feedback model (GDFM) that unifies
both post-click behaviors and early conversions as stochastic post-click
information, which could be utilized to train GDFM in a streaming manner
efficiently. Based on GDFM, we further establish a novel perspective that the
performance gap introduced by delayed feedback can be attributed to a temporal
gap and a sampling gap. Inspired by our analysis, we propose to measure the
quality of post-click information with a combination of temporal distance and
sample complexity. The training objective is re-weighted accordingly to
highlight informative and timely signals. We validate our analysis on public
datasets, and experimental performance confirms the effectiveness of our
method.
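The re-weighting idea in the abstract can be sketched in a few lines. The weighting function, its decay hyperparameter `alpha`, and the use of sample count as a proxy for sample complexity are all illustrative assumptions, not the paper's actual derivation:

```python
import math

def post_click_weight(temporal_distance_hours, n_samples, alpha=0.1):
    """Toy weight for one post-click signal: smaller temporal distance from
    the click (timeliness) and lower sample complexity, proxied here by a
    larger number of observed samples (reliability), give a larger weight.
    `alpha` is a hypothetical decay hyperparameter."""
    timeliness = math.exp(-alpha * temporal_distance_hours)
    reliability = 1.0 - 1.0 / math.sqrt(max(n_samples, 1))
    return timeliness * reliability

# A fast, frequently observed signal (e.g. add-to-cart within 30 minutes)
# outweighs a slow, rarely observed one in the re-weighted objective.
w_fast = post_click_weight(0.5, 10_000)
w_slow = post_click_weight(24.0, 100)
```

Such a weight would multiply each signal's term in the training loss, emphasizing informative and timely post-click signals as the abstract describes.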
Related papers
- Time Matters: Enhancing Pre-trained News Recommendation Models with Robust User Dwell Time Injection [5.460517407836779]
Large Language Models (LLMs) have revolutionized text comprehension, leading to state-of-the-art (SOTA) news recommendation models.
However, accurately modeling user preferences remains challenging due to the inherent uncertainty of click behaviors.
This paper proposes two novel and robust dwell-time injection strategies, namely Dwell time Weight (DweW) and Dwell time Aware (DweA).
arXiv Detail & Related papers (2024-05-21T04:08:07Z) - Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers [55.46413719810273]
Rich spatio-temporal information is crucial for modeling the complicated target appearance in visual tracking.
Our method improves the tracker's performance on six popular tracking benchmarks.
arXiv Detail & Related papers (2024-03-15T02:39:26Z) - A Machine Learning Approach to Improving Timing Consistency between
Global Route and Detailed Route [3.202646674984817]
Inaccurate timing prediction wastes design effort, hurts circuit performance, and may lead to design failure.
This work focuses on timing prediction after clock tree synthesis and placement legalization, which is the earliest opportunity to time and optimize a "complete" netlist.
To bridge the gap between GR-based parasitic and timing estimation and post-DR results during post-GR optimization, machine learning (ML)-based models are proposed.
arXiv Detail & Related papers (2023-05-11T16:01:23Z) - Test-Time Distribution Normalization for Contrastively Learned
Vision-language Models [39.66329310098645]
One of the most representative approaches proposed recently, CLIP, has garnered widespread adoption due to its effectiveness.
This paper reveals that the common downstream practice of taking a dot product is only a zeroth-order approximation of the optimization goal, resulting in a loss of information during test-time.
We propose Distribution Normalization (DN), where we approximate the mean representation of a batch of test samples and use such a mean to represent what would be analogous to negative samples in the InfoNCE loss.
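The DN idea can be illustrated with a simplified sketch; the function name and the plain mean-subtraction form are assumptions for illustration, and the paper's exact estimator may differ:

```python
def dn_score(image_emb, text_emb, text_bank):
    """Sketch of Distribution Normalization: estimate a mean text embedding
    from a bank of test samples and subtract it before the dot product, so
    the score acts like a contrast against an 'average' negative rather
    than a raw zeroth-order dot product."""
    dim = len(text_emb)
    mu = [sum(vec[i] for vec in text_bank) / len(text_bank) for i in range(dim)]
    return sum(x * (y - m) for x, y, m in zip(image_emb, text_emb, mu))
```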
arXiv Detail & Related papers (2023-02-22T01:14:30Z) - Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
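Sampling a negative item from the current model's preferences, as described above, can be sketched as follows; the function signature and softmax-style weighting are illustrative assumptions:

```python
import math
import random

def sample_negative(scores, interacted, temperature=1.0, rng=None):
    """Sketch of model-aware negative sampling for sequential recommendation:
    draw a non-interacted item with probability proportional to the current
    model's (softmax-scaled) preference score, so training sees 'hard'
    negatives. Names and parameters are illustrative."""
    rng = rng or random.Random()
    candidates = [i for i in range(len(scores)) if i not in interacted]
    weights = [math.exp(scores[i] / temperature) for i in candidates]
    r = rng.random() * sum(weights)
    acc = 0.0
    for item, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return item
    return candidates[-1]
```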
arXiv Detail & Related papers (2022-08-07T05:44:13Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Deep Feedback Inverse Problem Solver [141.26041463617963]
We present an efficient, effective, and generic approach towards solving inverse problems.
We leverage the feedback signal provided by the forward process and learn an iterative update model.
Our approach does not have any restrictions on the forward process; it does not require any prior knowledge either.
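The generic feedback loop described above can be sketched as follows; the `update` argument stands in for the paper's learned iterative update model, and the toy forward process is an assumption for illustration:

```python
def feedback_solve(forward, update, y_target, x0, n_iters=50):
    """Generic sketch of a feedback-driven inverse solver: repeatedly run
    the forward process, form the feedback signal (target minus current
    output), and let an update model refine the estimate."""
    x = x0
    for _ in range(n_iters):
        feedback = y_target - forward(x)
        x = update(x, feedback)
    return x

# Toy usage: recover x from y = 3*x with a hand-written damped update
# standing in for the learned update model.
x_hat = feedback_solve(lambda x: 3.0 * x, lambda x, fb: x + 0.1 * fb, 9.0, 0.0)
```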
arXiv Detail & Related papers (2021-01-19T16:49:06Z) - Capturing Delayed Feedback in Conversion Rate Prediction via
Elapsed-Time Sampling [29.77426549280091]
Conversion rate (CVR) prediction is one of the most critical tasks for digital display advertising.
We propose Elapsed-Time Sampling Delayed Feedback Model (ES-DFM), which models the relationship between the observed conversion distribution and the true conversion distribution.
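One way to relate the observed and true conversion distributions is an importance-weight correction on elapsed time. The sketch below assumes an exponential delay distribution with an illustrative mean; it is not ES-DFM's actual estimator:

```python
import math

def observed_conversion_prob(elapsed_hours, mean_delay_hours=12.0):
    """Under an assumed exponential delay, the chance that a true
    conversion has already been observed after `elapsed_hours`."""
    return 1.0 - math.exp(-elapsed_hours / mean_delay_hours)

def importance_weight(elapsed_hours, mean_delay_hours=12.0):
    """Up-weight early-observed positives to compensate for conversions
    that have not arrived yet (an importance-sampling style correction)."""
    p = observed_conversion_prob(elapsed_hours, mean_delay_hours)
    return 1.0 / max(p, 1e-6)

# A positive observed after 1 hour is rarer than one after 48 hours,
# so it receives a larger corrective weight.
```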
arXiv Detail & Related papers (2020-12-06T12:20:50Z) - Change Point Detection in Time Series Data using Autoencoders with a
Time-Invariant Representation [69.34035527763916]
Change point detection (CPD) aims to locate abrupt property changes in time series data.
Recent CPD methods demonstrated the potential of using deep learning techniques, but often lack the ability to identify more subtle changes in the autocorrelation statistics of the signal.
We employ an autoencoder-based methodology with a novel loss function, through which the used autoencoders learn a partially time-invariant representation that is tailored for CPD.
arXiv Detail & Related papers (2020-08-21T15:03:21Z) - A Feedback Shift Correction in Predicting Conversion Rates under Delayed
Feedback [6.38500614968955]
In display advertising, predicting the conversion rate is fundamental to estimating the value of displaying the advertisement.
There is a relatively long time delay between a click and its resultant conversion.
Because of the delayed feedback, some positive instances at the training period are labeled as negative.
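The mislabeling effect can be illustrated with a small simulation; the exponential delay assumption and the chosen parameters are illustrative, not from the paper:

```python
import random

def mislabeled_fraction(n=10_000, mean_delay=12.0, window=6.0, seed=0):
    """Simulate conversion delays (assumed exponential): any conversion
    arriving after the observation window is wrongly labeled negative
    at training time."""
    rng = random.Random(seed)
    late = sum(1 for _ in range(n)
               if rng.expovariate(1.0 / mean_delay) > window)
    return late / n

frac = mislabeled_fraction()
# Analytically, exp(-window / mean_delay) = exp(-0.5), so roughly 61%
# of true positives are still unobserved and thus mislabeled.
```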
arXiv Detail & Related papers (2020-02-06T02:05:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.