Contrastive Learning for Conversion Rate Prediction
- URL: http://arxiv.org/abs/2307.05974v1
- Date: Wed, 12 Jul 2023 07:42:52 GMT
- Title: Contrastive Learning for Conversion Rate Prediction
- Authors: Wentao Ouyang, Rui Dong, Xiuwu Zhang, Chaofeng Guo, Jinmei Luo,
Xiangzheng Liu, Yanlong Du
- Abstract summary: We propose Contrastive Learning for CVR prediction (CL4CVR) framework.
It associates the supervised CVR prediction task with a contrastive learning task, which can learn better data representations.
Experimental results on two real-world conversion datasets demonstrate the superior performance of CL4CVR.
- Score: 6.607531486024888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversion rate (CVR) prediction plays an important role in advertising
systems. Recently, supervised deep neural network-based models have shown
promising performance in CVR prediction. However, they are data hungry and
require an enormous amount of training data. In online advertising systems,
although there are millions to billions of ads, users tend to click only a
small set of them and to convert on an even smaller set. This data sparsity
issue restricts the power of these deep models. In this paper, we propose the
Contrastive Learning for CVR prediction (CL4CVR) framework. It associates the
supervised CVR prediction task with a contrastive learning task, which can
learn better data representations exploiting abundant unlabeled data and
improve the CVR prediction performance. To tailor the contrastive learning task
to the CVR prediction problem, we propose embedding masking (EM), rather than
feature masking, to create two views of augmented samples. We also propose a
false negative elimination (FNE) component to eliminate samples with the same
feature as the anchor sample, to account for the natural property in user
behavior data. We further propose a supervised positive inclusion (SPI)
component to include additional positive samples for each anchor sample, in
order to make full use of sparse but precious user conversion events.
Experimental results on two real-world conversion datasets demonstrate the
superior performance of CL4CVR. The source code is available at
https://github.com/DongRuiHust/CL4CVR.
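To make the three components concrete, below is a minimal PyTorch-style sketch of how embedding masking (EM), false negative elimination (FNE), and supervised positive inclusion (SPI) could fit together around an InfoNCE-style contrastive loss. This is an illustration under stated assumptions, not the paper's implementation (see the linked repository for that): the element-wise masking granularity, the use of a single key feature such as user id for FNE, and the exact loss form are all assumptions made here.

```python
import torch
import torch.nn.functional as F

def embedding_mask(field_emb: torch.Tensor, mask_ratio: float = 0.1) -> torch.Tensor:
    """Embedding masking (EM): zero a random fraction of embedding entries to
    create one augmented view. (Assumed granularity: element-wise masking of
    the field-embedding tensor; the paper may mask at a different level.)"""
    keep = (torch.rand_like(field_emb) > mask_ratio).float()
    return field_emb * keep

def cl4cvr_contrastive_loss(z1, z2, fne_key, conv_label, tau: float = 0.1):
    """InfoNCE-style loss over two augmented views of a mini-batch, with:
      - FNE: in-batch negatives sharing `fne_key` (e.g. user id, an assumption
        here) with the anchor are excluded from the denominator;
      - SPI: when the anchor converted, other converted samples in the batch
        are treated as additional positives.
    z1, z2:      (n, d) representations of the two views
    fne_key:     (n,) integer key used to detect false negatives
    conv_label:  (n,) 0/1 conversion labels
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    n = z1.size(0)
    sim = z1 @ z2.t() / tau                               # cross-view similarities
    eye = torch.eye(n, dtype=torch.bool, device=z1.device)

    # SPI: converted-anchor / converted-candidate pairs are extra positives.
    spi = (conv_label.unsqueeze(1) == 1) & (conv_label.unsqueeze(0) == 1)
    pos = eye | spi                                       # each row keeps >= 1 positive

    # FNE: drop negatives that share the key feature with the anchor.
    same_key = fne_key.unsqueeze(1) == fne_key.unsqueeze(0)
    valid = pos | ~same_key                               # keep positives + true negatives
    logits = sim.masked_fill(~valid, float("-inf"))

    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    return -(log_prob.sum(1) / pos.float().sum(1)).mean()
```

In training, the two views would be obtained by masking the same field embeddings twice and passing both through the shared encoder, and the overall objective would combine the supervised CVR loss (e.g. binary cross-entropy) with a weighted version of this contrastive term; the weight and the masking ratio are tuning knobs in this sketch, not values taken from the paper.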
Related papers
- RAT: Retrieval-Augmented Transformer for Click-Through Rate Prediction [68.34355552090103]
This paper develops a Retrieval-Augmented Transformer (RAT), aiming to acquire fine-grained feature interactions within and across samples.
We then build Transformer layers with cascaded attention to capture both intra- and cross-sample feature interactions.
Experiments on real-world datasets substantiate the effectiveness of RAT and suggest its advantage in long-tail scenarios.
arXiv Detail & Related papers (2024-04-02T19:14:23Z) - MAP: A Model-agnostic Pretraining Framework for Click-through Rate
Prediction [39.48740397029264]
We propose a Model-agnostic pretraining (MAP) framework that applies feature corruption and recovery on multi-field categorical data.
We derive two practical algorithms: masked feature prediction (MFP) and replaced feature detection (RFD).
arXiv Detail & Related papers (2023-08-03T12:55:55Z) - Click-Conversion Multi-Task Model with Position Bias Mitigation for
Sponsored Search in eCommerce [51.211924408864355]
We propose two position-bias-free prediction models: Position-Aware Click-Conversion (PACC) and PACC via Position Embedding (PACC-PE).
Experiments on the E-commerce sponsored product search dataset show that our proposed models have better ranking effectiveness and can greatly alleviate position bias in both CTR and CVR prediction.
arXiv Detail & Related papers (2023-07-29T19:41:16Z) - It Takes Two: Masked Appearance-Motion Modeling for Self-supervised
Video Transformer Pre-training [76.69480467101143]
Self-supervised video transformer pre-training has recently benefited from the mask-and-predict pipeline.
We explicitly investigate motion cues in videos as an extra prediction target and propose our Masked Appearance-Motion Modeling framework.
Our method learns generalized video representations and achieves 82.3% on Kinetics-400, 71.3% on Something-Something V2, 91.5% on UCF101, and 62.5% on HMDB51.
arXiv Detail & Related papers (2022-10-11T08:05:18Z) - Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z) - Conversion Rate Prediction via Meta Learning in Small-Scale
Recommendation Scenarios [17.02759665047561]
We propose a novel CVR method named MetaCVR from a perspective of meta learning to address the Data Distribution Fluctuation (DDF) issue.
To the best of our knowledge, this is the first study of CVR prediction targeting the DDF issue in small-scale recommendation scenarios.
arXiv Detail & Related papers (2021-12-27T16:05:42Z) - Quantum-Assisted Support Vector Regression for Detecting Facial
Landmarks [0.0]
We devise simulated and quantum-classical hybrid algorithms for training two SVR models.
We compare their empirical performances against the SVR implementation of Python's scikit-learn package.
Our work is a proof-of-concept example for applying quantum-assisted SVR to a supervised learning task.
arXiv Detail & Related papers (2021-11-17T18:57:10Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Delayed Feedback Modeling for the Entire Space Conversion Rate
Prediction [15.579755993971657]
Estimating post-click conversion rate (CVR) accurately is crucial in E-commerce.
We propose a novel neural network framework ESDF to tackle the above three challenges simultaneously.
arXiv Detail & Related papers (2020-11-24T01:14:03Z) - LT4REC:A Lottery Ticket Hypothesis Based Multi-task Practice for Video
Recommendation System [2.7174057828883504]
Click-through rate prediction (CTR) and post-click conversion rate prediction (CVR) play key roles across all industrial ranking systems.
In this paper, we model CVR with a brand-new method that adopts lottery-ticket-hypothesis-based sparse sharing multi-task learning.
Experiments on the dataset gathered from traffic logs of Tencent video's recommendation system demonstrate that sparse sharing in the CVR model significantly outperforms competitive methods.
arXiv Detail & Related papers (2020-08-22T16:48:08Z) - Deep Learning for Content-based Personalized Viewport Prediction of
360-Degree VR Videos [72.08072170033054]
In this paper, a deep learning network is introduced to leverage position data as well as video frame content to predict future head movement.
To optimize the data fed into this neural network, the effects of data sample rate, data reduction, and long-period prediction length are also explored for this model.
arXiv Detail & Related papers (2020-03-01T07:31:50Z)