A Feedback Shift Correction in Predicting Conversion Rates under Delayed
Feedback
- URL: http://arxiv.org/abs/2002.02068v1
- Date: Thu, 6 Feb 2020 02:05:07 GMT
- Title: A Feedback Shift Correction in Predicting Conversion Rates under Delayed
Feedback
- Authors: Shota Yasui, Gota Morishita, Komei Fujita, Masashi Shibata
- Abstract summary: In display advertising, predicting the conversion rate is fundamental to estimating the value of displaying the advertisement.
There is a relatively long time delay between a click and its resultant conversion.
Because of the delayed feedback, some positive instances in the training period are labeled as negative.
- Score: 6.38500614968955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In display advertising, predicting the conversion rate, that is, the
probability that a user takes a predefined action on an advertiser's website,
such as purchasing goods, is fundamental to estimating the value of displaying
the advertisement. However, there is a relatively long time delay between a
click and its resultant conversion. Because of this delayed feedback, some
positive instances in the training period are labeled as negative because their
conversions have not yet occurred when the training data are gathered. As a result,
the conditional label distributions differ between the training data and the
production environment. This situation is referred to as a feedback shift. We
address this problem by using an importance weight approach typically used for
covariate shift correction. We prove its consistency for the feedback shift.
Results in both offline and online experiments show that our proposed method
outperforms the existing method.
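As a rough illustration of the importance weight approach described above, the Python sketch below applies per-sample weights to the standard log loss. The weight inputs `w_pos` and `w_neg` are illustrative assumptions (the paper estimates its own consistent weights for the feedback shift, e.g. with auxiliary models, which this sketch does not reproduce).

```python
import numpy as np

def weighted_log_loss(y_obs, p_pred, w_pos, w_neg):
    """Importance-weighted log loss: a minimal sketch of correcting a
    feedback shift by reweighting observed (possibly mislabeled) labels.

    y_obs  -- observed 0/1 labels, biased by delayed conversions
    p_pred -- predicted conversion probabilities
    w_pos, w_neg -- hypothetical per-sample importance weights for the
                    positive and negative terms of the loss
    """
    eps = 1e-7
    p = np.clip(p_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_obs * w_pos * np.log(p)
                    + (1.0 - y_obs) * w_neg * np.log(1.0 - p))

# Toy usage: with all weights equal to 1 this reduces to the ordinary
# (uncorrected) log loss; shift-correcting weights would differ from 1.
y = np.array([1, 0, 0, 1])
p = np.array([0.8, 0.3, 0.6, 0.7])
print(weighted_log_loss(y, p, w_pos=np.ones(4), w_neg=np.ones(4)))
```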
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapter (TDA) is a promising approach to adapting vision-language models at test time.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL)
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
arXiv Detail & Related papers (2024-01-13T11:18:18Z)
- Adapting to Continuous Covariate Shift via Online Density Ratio Estimation [64.8027122329609]
Dealing with distribution shifts is one of the central challenges for modern machine learning.
We propose an online method that can appropriately reuse historical information.
Our density ratio estimation method is proven to perform well, enjoying a dynamic regret bound.
arXiv Detail & Related papers (2023-02-06T04:03:33Z)
- Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems [37.72697954740977]
We show that post-click user behaviors are also informative to conversion rate prediction and can be used to improve timeliness.
We propose a generalized delayed feedback model (GDFM) that unifies both post-click behaviors and early conversions as post-click information.
arXiv Detail & Related papers (2022-06-01T11:17:01Z)
- Asymptotically Unbiased Estimation for Delayed Feedback Modeling via Label Correction [14.462884375151045]
Handling delayed feedback is crucial for conversion rate prediction in online advertising.
Previous delayed feedback modeling methods balance the trade-off between waiting for accurate labels and consuming fresh feedback.
We propose a new method, DElayed Feedback modeling with UnbiaSed Estimation (DEFUSE), which aims to correct the importance weights of the immediate positive, fake negative, real negative, and delayed positive samples, respectively.
arXiv Detail & Related papers (2022-02-14T03:31:09Z)
- Real Negatives Matter: Continuous Training with Real Negatives for Delayed Feedback Modeling [10.828167195122072]
We propose the DElayed FEedback modeling with Real negatives (DEFER) method to address these issues.
The ingestion of real negatives ensures the observed feature distribution is equivalent to the actual distribution, thus reducing the bias.
DEFER has been deployed in the display advertising system of Alibaba, obtaining over 6.4% improvement on CVR in several scenarios.
arXiv Detail & Related papers (2021-04-29T05:37:34Z)
- Reducing Representation Drift in Online Continual Learning [87.71558506591937]
We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
arXiv Detail & Related papers (2021-04-11T15:19:30Z)
- Capturing Delayed Feedback in Conversion Rate Prediction via Elapsed-Time Sampling [29.77426549280091]
Conversion rate (CVR) prediction is one of the most critical tasks for digital display advertising.
We propose Elapsed-Time Sampling Delayed Feedback Model (ES-DFM), which models the relationship between the observed conversion distribution and the true conversion distribution.
arXiv Detail & Related papers (2020-12-06T12:20:50Z)
- AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries [55.059844800514774]
We propose an Adversarial Contrastive (AdCo) model to train representations that are hard to discriminate against positive queries.
Experimental results demonstrate that the proposed Adversarial Contrastive (AdCo) model achieves superior performances.
arXiv Detail & Related papers (2020-11-17T05:45:46Z)
- Learning Classifiers under Delayed Feedback with a Time Window Assumption [16.269923100433235]
We consider training a binary classifier under delayed feedback (DF learning).
We initially receive negative samples; subsequently, some samples among them change to positive.
Owing to the delayed feedback, naive classification of the positive and negative samples returns a biased classifier.
arXiv Detail & Related papers (2020-09-28T06:20:24Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift (a minimal sketch follows this list).
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
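For the prediction-time batch normalization entry above, here is a minimal NumPy sketch, assuming a single batch-norm layer with learned affine parameters `gamma` and `beta` (hypothetical names): statistics are recomputed from the test batch itself instead of using training-time running averages.

```python
import numpy as np

def prediction_time_batchnorm(x_batch, gamma, beta, eps=1e-5):
    """Normalize a test batch with statistics computed from that batch
    (a sketch of prediction-time batch normalization; gamma and beta
    are the layer's learned affine parameters)."""
    mu = x_batch.mean(axis=0)   # per-feature mean of the test batch
    var = x_batch.var(axis=0)   # per-feature variance of the test batch
    return gamma * (x_batch - mu) / np.sqrt(var + eps) + beta

# Toy usage on a covariate-shifted batch of 4 samples with 3 features:
# the output is re-centered and re-scaled regardless of the input shift.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=(4, 3))
out = prediction_time_batchnorm(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0), out.var(axis=0))  # ~0 mean, ~1 variance per feature
```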
This list is automatically generated from the titles and abstracts of the papers on this site.