CL4CTR: A Contrastive Learning Framework for CTR Prediction
- URL: http://arxiv.org/abs/2212.00522v1
- Date: Thu, 1 Dec 2022 14:18:02 GMT
- Title: CL4CTR: A Contrastive Learning Framework for CTR Prediction
- Authors: Fangye Wang, Yingxu Wang, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang,
Ning Gu
- Abstract summary: We introduce self-supervised learning to produce high-quality feature representations directly.
We propose a model-agnostic Contrastive Learning for CTR (CL4CTR) framework consisting of three self-supervised learning signals.
CL4CTR achieves the best performance on four datasets.
- Score: 14.968714571151509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many Click-Through Rate (CTR) prediction works have focused on designing
advanced architectures to model complex feature interactions while neglecting the
importance of feature representation learning, e.g., adopting a plain embedding
layer for each feature, which results in sub-optimal feature representations
and thus inferior CTR prediction performance. For instance, low-frequency
features, which account for the majority of features in many CTR tasks, are
less considered in standard supervised learning settings, leading to
sub-optimal feature representations. In this paper, we introduce
self-supervised learning to produce high-quality feature representations
directly and propose a model-agnostic Contrastive Learning for CTR (CL4CTR)
framework consisting of three self-supervised learning signals to regularize
the feature representation learning: contrastive loss, feature alignment, and
field uniformity. The contrastive module first constructs positive feature
pairs by data augmentation and then minimizes the distance between the
representations of each positive feature pair by the contrastive loss. The
feature alignment constraint forces the representations of features from the
same field to be close, and the field uniformity constraint forces the
representations of features from different fields to be distant. Extensive
experiments verify that CL4CTR achieves the best performance on four datasets
and has excellent effectiveness and compatibility with various representative
baselines.
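The three self-supervised signals described in the abstract can be sketched in plain Python. This is an illustrative sketch only, not the paper's implementation: the function names, the use of Euclidean distance, and the simple averaging over pairs are assumptions for exposition (the paper defines its own loss formulations).

```python
import math
from itertools import combinations

def l2_dist(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrastive_loss(positive_pairs):
    """Contrastive signal: mean distance between the representations of
    each (augmentation-constructed) positive feature pair -- minimized
    so that both views of a feature map to nearby embeddings."""
    return sum(l2_dist(u, v) for u, v in positive_pairs) / len(positive_pairs)

def alignment_loss(same_field_embeddings):
    """Feature alignment: average pairwise distance between features of
    the SAME field -- minimized to pull them together."""
    pairs = list(combinations(same_field_embeddings, 2))
    return sum(l2_dist(u, v) for u, v in pairs) / len(pairs)

def uniformity_loss(field_a, field_b):
    """Field uniformity: negative average distance between features of
    DIFFERENT fields -- minimizing this pushes the fields apart."""
    total = sum(l2_dist(u, v) for u in field_a for v in field_b)
    return -total / (len(field_a) * len(field_b))
```

In the framework these terms would be added, with weighting coefficients, to the supervised CTR loss as regularizers on the embedding table.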
Related papers
- TF4CTR: Twin Focus Framework for CTR Prediction via Adaptive Sample Differentiation [14.047096669510369]
This paper introduces a novel CTR prediction framework by integrating the plug-and-play Twin Focus (TF) Loss, Sample Selection Embedding Module (SSEM), and Dynamic Fusion Module (DFM).
Experiments on five real-world datasets confirm the effectiveness and compatibility of the framework.
arXiv Detail & Related papers (2024-05-06T05:22:40Z)
- Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning [45.25602203155762]
Self-Supervised Contrastive Learning has proven effective in deriving high-quality representations from unlabeled data.
A major challenge that hinders both unimodal and multimodal contrastive learning is feature suppression.
We propose a novel model-agnostic Multistage Contrastive Learning framework.
arXiv Detail & Related papers (2024-02-19T04:13:33Z)
- Unveiling Backbone Effects in CLIP: Exploring Representational Synergies and Variances [49.631908848868505]
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning.
We investigate the differences in CLIP performance among various neural architectures.
We propose a simple, yet effective approach to combine predictions from multiple backbones, leading to a notable performance boost of up to 6.34%.
arXiv Detail & Related papers (2023-12-22T03:01:41Z)
- Hierarchical Visual Primitive Experts for Compositional Zero-Shot Learning [52.506434446439776]
Compositional zero-shot learning (CZSL) aims to recognize compositions with prior knowledge of known primitives (attribute and object).
We propose a simple and scalable framework called Composition Transformer (CoT) to address these issues.
Our method achieves SoTA performance on several benchmarks, including MIT-States, C-GQA, and VAW-CZSL.
arXiv Detail & Related papers (2023-08-08T03:24:21Z)
- MAP: A Model-agnostic Pretraining Framework for Click-through Rate Prediction [39.48740397029264]
We propose a Model-agnostic pretraining (MAP) framework that applies feature corruption and recovery on multi-field categorical data.
We derive two practical algorithms: masked feature prediction (MFP) and replaced feature detection (RFD).
arXiv Detail & Related papers (2023-08-03T12:55:55Z)
- DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for CTR Prediction [61.68415731896613]
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation.
We propose a model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction.
arXiv Detail & Related papers (2023-05-03T12:34:45Z)
- Disentangled Representation Learning for Text-Video Retrieval [51.861423831566626]
Cross-modality interaction is a critical component in Text-Video Retrieval (TVR).
We study the interaction paradigm in depth, where we find that its computation can be split into two terms.
We propose a disentangled framework to capture a sequential and hierarchical representation.
arXiv Detail & Related papers (2022-03-14T13:55:33Z)
- Masked Transformer for Neighbourhood-aware Click-Through Rate Prediction [74.52904110197004]
We propose Neighbor-Interaction based CTR prediction, which puts this task into a Heterogeneous Information Network (HIN) setting.
In order to enhance the representation of the local neighbourhood, we consider four types of topological interaction among the nodes.
We conduct comprehensive experiments on two real world datasets and the experimental results show that our proposed method outperforms state-of-the-art CTR models significantly.
arXiv Detail & Related papers (2022-01-25T12:44:23Z)
- AutoDis: Automatic Discretization for Embedding Numerical Features in CTR Prediction [45.69943728028556]
Learning sophisticated feature interactions is crucial for Click-Through Rate (CTR) prediction in recommender systems.
Various deep CTR models follow an Embedding & Feature Interaction paradigm.
We propose AutoDis, a framework that discretizes features in numerical fields automatically and is optimized with CTR models in an end-to-end manner.
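The idea of learnable, end-to-end discretization can be sketched as a soft assignment of a raw numerical value to bucket weights. This is a hedged illustration, not AutoDis's actual scoring function: the negative-distance logits, the `temperature` parameter, and the function name are assumptions; the paper's mechanism differs in detail.

```python
import math

def soft_discretize(x, bucket_centers, temperature=1.0):
    """Map a raw numerical value x to soft bucket weights via a
    temperature-scaled softmax over negative distances to (learnable)
    bucket centers. The weights would then mix per-bucket embeddings,
    keeping the whole mapping differentiable for end-to-end training."""
    logits = [-abs(x - c) / temperature for c in bucket_centers]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Because the assignment is soft rather than a hard binning, gradients from the CTR loss can flow back into the bucket parameters.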
arXiv Detail & Related papers (2020-12-16T14:31:31Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrastive examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.