Learning Classifiers under Delayed Feedback with a Time Window Assumption
- URL: http://arxiv.org/abs/2009.13092v2
- Date: Fri, 10 Jun 2022 05:56:39 GMT
- Title: Learning Classifiers under Delayed Feedback with a Time Window Assumption
- Authors: Masahiro Kato and Shota Yasui
- Abstract summary: We consider training a binary classifier under delayed feedback (DF learning).
We initially receive negative samples; subsequently, some of them change to positive.
Owing to the delayed feedback, naive classification of the positive and negative samples returns a biased classifier.
- Score: 16.269923100433235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider training a binary classifier under delayed feedback (\emph{DF
learning}). For example, in conversion prediction for online ads, we initially
receive negative samples, corresponding to users who clicked an ad but did not
buy an item; subsequently, some of those users buy an item, and their samples
change to positive. In the setting of DF learning, we observe samples over
time and then learn a classifier at some point. This problem arises in various
real-world applications, such as online advertising, where the user action may
take place long after the first click. Owing to the delayed feedback, naively
classifying the observed positive and negative samples returns a biased
classifier. One solution is to use only samples that have been observed for
more than a certain time window, assuming these samples are correctly labeled.
However, existing studies have reported that simply using a subset of all
samples based on the time window assumption does not perform well, and that
using all samples along with the time window assumption improves empirical
performance. We extend these studies and propose a method with an unbiased and
convex empirical risk constructed from all samples under the time window
assumption. To demonstrate the soundness of the proposed method, we provide
experimental results on synthetic data and an open dataset of real traffic
logs from online advertising.
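The fake-negative mechanism the abstract describes can be simulated directly. The sketch below is a minimal illustration, not the authors' estimator: it draws synthetic clicks with exponentially distributed conversion delays (all parameter values are assumptions) and compares the label-noise rate of naive labeling against the subset kept under a time window assumption:

```python
import numpy as np

# Hypothetical synthetic setup (all numbers are assumptions, not from the paper):
# each clicked sample converts with some probability, and the conversion
# arrives only after a random delay, creating "fake negatives" at training time.
rng = np.random.default_rng(0)
n = 20_000
p_convert = 0.3
click_time = rng.uniform(0.0, 100.0, n)   # when each click happened
converts = rng.random(n) < p_convert      # true (eventual) labels
delay = rng.exponential(5.0, n)           # click-to-conversion delay
T = 100.0                                 # time at which we train
window = 30.0                             # time window assumption

true_label = converts
# Observed label: positive only if the conversion arrived before time T.
observed = converts & (click_time + delay <= T)

# Naive training data: every sample, with its (possibly stale) observed label.
naive_mislabeled = np.mean(true_label != observed)

# Time-window subset: keep only samples observed for at least `window`,
# assuming their labels have settled by then.
old_enough = (T - click_time) >= window
window_mislabeled = np.mean(true_label[old_enough] != observed[old_enough])

print(naive_mislabeled, window_mislabeled)
```

Under these assumptions the windowed subset contains far fewer fake negatives, at the cost of discarding the freshest samples, which is the trade-off the paper's unbiased risk over all samples is designed to resolve.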
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapters (TDA) are a promising approach to adapting vision-language models at test time.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Which Pretrain Samples to Rehearse when Finetuning Pretrained Models? [60.59376487151964]
Fine-tuning pretrained models on specific tasks is now the de facto approach for text and vision tasks.
A known pitfall of this approach is the forgetting of pretraining knowledge that happens during finetuning.
We propose a novel sampling scheme, mix-cd, that identifies and prioritizes samples that actually face forgetting.
arXiv Detail & Related papers (2024-02-12T22:32:12Z)
- Better Sampling of Negatives for Distantly Supervised Named Entity Recognition [39.264878763160766]
We propose a simple approach: select for training the top negative samples that have high similarity to all the positive samples.
Our method achieves consistent performance improvements on four distantly supervised NER datasets.
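The selection rule in this summary can be sketched as follows; the function name, the use of mean cosine similarity, and the toy embeddings are illustrative assumptions, not details from the paper:

```python
import numpy as np

def select_hard_negatives(pos_emb, neg_emb, top_k):
    """Rank negatives by mean cosine similarity to all positives and
    keep the top_k most similar ('hard') negatives for training."""
    def unit(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)
    sim = unit(neg_emb) @ unit(pos_emb).T      # (n_neg, n_pos) similarities
    return np.argsort(sim.mean(axis=1))[::-1][:top_k]

# Toy 2-d embeddings: negative 1 points almost the same way as the positives.
positives = np.array([[1.0, 0.0], [0.9, 0.1]])
negatives = np.array([[0.0, 1.0], [1.0, 0.1], [-1.0, 0.0]])
hard = select_hard_negatives(positives, negatives, top_k=1)
print(hard)  # -> [1]
```

Negative 1 is nearly parallel to the positives, so it is the "hard" negative this rule keeps.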
arXiv Detail & Related papers (2023-05-22T15:35:39Z)
- Test-Time Distribution Normalization for Contrastively Learned Vision-language Models [39.66329310098645]
One of the most representative recent approaches, CLIP, has garnered widespread adoption due to its effectiveness.
This paper reveals that the common downstream practice of taking a dot product is only a zeroth-order approximation of the optimization goal, resulting in a loss of information at test time.
We propose Distribution Normalization (DN), where we approximate the mean representation of a batch of test samples and use such a mean to represent what would be analogous to negative samples in the InfoNCE loss.
arXiv Detail & Related papers (2023-02-22T01:14:30Z)
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
- ReSmooth: Detecting and Utilizing OOD Samples when Training with Data Augmentation [57.38418881020046]
Recent data augmentation (DA) techniques pursue diversity in augmented training samples.
An augmentation strategy that has a high diversity usually introduces out-of-distribution (OOD) augmented samples.
We propose ReSmooth, a framework that firstly detects OOD samples in augmented samples and then leverages them.
arXiv Detail & Related papers (2022-05-25T09:29:27Z)
- Asymptotically Unbiased Estimation for Delayed Feedback Modeling via Label Correction [14.462884375151045]
Delayed feedback is crucial for the conversion rate prediction in online advertising.
Previous delayed feedback modeling methods balance the trade-off between waiting for accurate labels and consuming fresh feedback.
We propose a new method, DElayed Feedback modeling with UnbiaSed Estimation (DEFUSE), which aims to correct the importance weights of the immediate positive, fake negative, real negative, and delayed positive samples, respectively.
arXiv Detail & Related papers (2022-02-14T03:31:09Z) - Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated
Label Mixing [104.630875328668]
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample.
We present a novel, yet simple Mixup-variant that captures the best of both worlds.
arXiv Detail & Related papers (2021-12-16T11:27:48Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
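For context, InfoNCE with K negatives can be written in a few lines of NumPy; the temperature and toy vectors below are assumptions for illustration, while the summary above concerns choosing the K that maximizes a training effectiveness function:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: cross-entropy of picking the positive
    out of the positive plus K negatives, with dot-product similarities."""
    sims = np.concatenate(([anchor @ positive], negatives @ anchor)) / temperature
    sims = sims - sims.max()                 # numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())

rng = np.random.default_rng(0)
dim, K = 16, 8
def unit(v): return v / np.linalg.norm(v, axis=-1, keepdims=True)
anchor = unit(rng.normal(size=dim))
positive = unit(anchor + 0.1 * rng.normal(size=dim))   # close to the anchor
negatives = unit(rng.normal(size=(K, dim)))            # K random negatives
loss = info_nce(anchor, positive, negatives)
```

Adding negatives only enlarges the softmax denominator, so the per-anchor loss grows with K; how many negatives actually help training is the question the paper studies.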
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning [32.76804332450971]
We design an adaptive sampler based on noisy-label robust learning for implicit feedback data.
We predict users' preferences with the model and learn it by maximizing the likelihood of the observed labels.
We then consider the risk of these noisy labels and propose a Noisy-label Robust BPO.
arXiv Detail & Related papers (2020-06-28T05:31:53Z)
- A Feedback Shift Correction in Predicting Conversion Rates under Delayed Feedback [6.38500614968955]
In display advertising, predicting the conversion rate is fundamental to estimating the value of displaying the advertisement.
There is a relatively long time delay between a click and its resultant conversion.
Because of the delayed feedback, some positive instances are labeled as negative during the training period.
arXiv Detail & Related papers (2020-02-06T02:05:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.