Hybrid Contrastive Constraints for Multi-Scenario Ad Ranking
- URL: http://arxiv.org/abs/2302.02636v1
- Date: Mon, 6 Feb 2023 09:15:39 GMT
- Title: Hybrid Contrastive Constraints for Multi-Scenario Ad Ranking
- Authors: Shanlei Mu, Penghui Wei, Wayne Xin Zhao, Shaoguo Liu, Liang Wang, Bo
Zheng
- Abstract summary: Multi-scenario ad ranking aims at leveraging the data from multiple domains or channels for training a unified ranking model.
We propose a Hybrid Contrastive Constrained approach (HC^2) for multi-scenario ad ranking.
- Score: 38.666592866591344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-scenario ad ranking aims at leveraging the data from multiple domains
or channels for training a unified ranking model to improve the performance at
each individual scenario. Although research on this task has made important
progress, it still lacks consideration of cross-scenario relations, which
limits learning capability and makes interrelation modeling difficult. In this
paper, we propose a Hybrid Contrastive Constrained approach
(HC^2) for multi-scenario ad ranking. To enhance the modeling of data
interrelation, we elaborately design a hybrid contrastive learning approach to
capture commonalities and differences among multiple scenarios. The core of our
approach consists of two elaborated contrastive losses, namely generalized and
individual contrastive loss, which aim at capturing common knowledge and
scenario-specific knowledge, respectively. To adapt contrastive learning to the
complex multi-scenario setting, we propose a series of important improvements.
For the generalized contrastive loss, we enhance contrastive learning by
extending the contrastive samples (label-aware and diffusion noise enhanced
contrastive samples) and reweighting them (reciprocal similarity weighting).
For the individual contrastive loss, we use dropout-based augmentation and
cross-scenario encoding to generate meaningful positive and negative
contrastive samples, respectively. Extensive
experiments in both offline evaluation and online testing have demonstrated
the effectiveness of the proposed HC^2 in comparison with a number of
competitive baselines.
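The pairing of a generalized and an individual contrastive loss can be illustrated with a minimal InfoNCE-style sketch in plain Python. The helper names (`cross_scenario_pos`, `dropout_view`, `lam`) are hypothetical, and the paper's actual sample construction (label-aware and diffusion-noise-enhanced samples, reciprocal similarity weighting) is not reproduced here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: -log(exp(sim(a,p)/t) / (exp(sim(a,p)/t) + sum_n exp(sim(a,n)/t)))."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

def hybrid_loss(anchor, cross_scenario_pos, dropout_view, negatives, lam=1.0):
    # Generalized loss: pull the anchor toward a same-label sample from another
    # scenario, so representations share common knowledge across scenarios.
    generalized = info_nce(anchor, cross_scenario_pos, negatives)
    # Individual loss: pull the anchor toward its own dropout-perturbed view,
    # preserving scenario-specific knowledge.
    individual = info_nce(anchor, dropout_view, negatives)
    return generalized + lam * individual
```

Both terms share one InfoNCE form and differ only in how positives are chosen, which is the core of the hybrid design described above.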
Related papers
- Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models [7.8245455684263545]
In this work, we aim to enhance ensemble diversity by reducing attack transferability.
We identify second-order gradients, which depict the loss curvature, as a key factor in adversarial robustness.
We introduce a novel regularizer to train multiple more-diverse low-curvature network models.
arXiv Detail & Related papers (2024-03-25T03:44:36Z)
- Advancing Relation Extraction through Language Probing with Exemplars from Set Co-Expansion [1.450405446885067]
Relation Extraction (RE) is a pivotal task in automatically extracting structured information from unstructured text.
We present a multi-faceted approach that integrates representative examples with set co-expansion.
Our method achieves an observed improvement of at least 1 percent in accuracy in most settings.
arXiv Detail & Related papers (2023-08-18T00:56:35Z)
- Implicit Counterfactual Data Augmentation for Robust Learning [24.795542869249154]
This study proposes an Implicit Counterfactual Data Augmentation method to remove spurious correlations and make stable predictions.
Experiments have been conducted across various biased learning scenarios covering both image and text datasets.
arXiv Detail & Related papers (2023-04-26T10:36:40Z)
- Rethinking Prototypical Contrastive Learning through Alignment, Uniformity and Correlation [24.794022951873156]
We propose to learn prototypical representations through Alignment, Uniformity and Correlation (PAUC).
Specifically, the ordinary ProtoNCE loss is revised with: (1) an alignment loss that pulls embeddings from positive prototypes together; (2) a uniformity loss that distributes prototype-level features uniformly; (3) a correlation loss that increases the diversity and discriminability among prototype-level features.
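The alignment and uniformity objectives named in this summary have a standard formulation; a minimal sketch with toy embeddings (not the paper's exact losses, which operate on prototypes):

```python
import math

def alignment(pairs):
    """Mean squared distance between positive-pair embeddings (lower = better aligned)."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(u, v)) for u, v in pairs
    ) / len(pairs)

def uniformity(embs, t=2.0):
    """Log of the mean Gaussian potential over all distinct pairs (lower = more uniform)."""
    total, count = 0.0, 0
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            d2 = sum((a - b) ** 2 for a, b in zip(embs[i], embs[j]))
            total += math.exp(-t * d2)
            count += 1
    return math.log(total / count)
```

Alignment rewards matched pairs for being close; uniformity rewards the whole embedding set for spreading out, which prevents the collapse that alignment alone would cause.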
arXiv Detail & Related papers (2022-10-18T22:33:12Z)
- Contrastive Principal Component Learning: Modeling Similarity by Augmentation Overlap [50.48888534815361]
We propose a novel Contrastive Principal Component Learning (CPCL) method composed of a contrastive-like loss and an on-the-fly projection loss.
By CPCL, the learned low-dimensional embeddings theoretically preserve the similarity of augmentation distribution between samples.
arXiv Detail & Related papers (2022-06-01T13:03:58Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Contrastive Learning based Hybrid Networks for Long-Tailed Image Classification [31.647639786095993]
We propose a novel hybrid network structure composed of a supervised contrastive loss to learn image representations and a cross-entropy loss to learn classifiers.
Experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive learning based hybrid networks in long-tailed classification.
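A two-branch objective of this kind (one contrastive loss for representations, one cross-entropy loss for the classifier) can be sketched as a weighted sum. The weighting `alpha` and the single-positive simplification are illustrative assumptions, not the paper's exact design:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def supervised_contrastive(anchor, positives, negatives, t=0.1):
    """Average InfoNCE over all same-class positives (SupCon-style)."""
    denom = sum(math.exp(cosine(anchor, x) / t) for x in positives + negatives)
    return -sum(
        math.log(math.exp(cosine(anchor, p) / t) / denom) for p in positives
    ) / len(positives)

def hybrid_objective(anchor, positives, negatives, class_probs, label, alpha=0.5):
    # Representation branch: supervised contrastive loss on embeddings.
    scl = supervised_contrastive(anchor, positives, negatives)
    # Classifier branch: cross-entropy on predicted class probabilities.
    ce = -math.log(class_probs[label])
    return alpha * scl + (1.0 - alpha) * ce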
arXiv Detail & Related papers (2021-03-26T05:22:36Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, two contrastive losses successfully constrain the clustering results of mini-batch samples in both sample and class level.
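The sample-view and class-view pairing described above can be sketched as InfoNCE over the rows and the columns of a batch's class-probability matrix. This is a minimal illustration under that reading, not the authors' implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two probability vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def view_loss(view_a, view_b, t=0.5):
    """Contrast matching rows of two views against all rows of the other (InfoNCE-style)."""
    loss = 0.0
    for i, row in enumerate(view_a):
        logits = [math.exp(cosine(row, other) / t) for other in view_b]
        loss += -math.log(logits[i] / sum(logits))
    return loss / len(view_a)

def doubly_contrastive(p_orig, p_aug, t=0.5):
    # Sample view: rows are per-sample class distributions;
    # each row and its augmented counterpart form a positive pair.
    sample = view_loss(p_orig, p_aug, t)
    # Class view: columns are per-class sample distributions;
    # contrast the transposed matrices the same way.
    cols_o = [list(c) for c in zip(*p_orig)]
    cols_a = [list(c) for c in zip(*p_aug)]
    cls = view_loss(cols_o, cols_a, t)
    return sample + cls
```

Minimizing both terms pushes an augmented batch to agree with the original at both the per-sample and per-class level, which is the double constraint the summary refers to.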
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
arXiv Detail & Related papers (2021-02-12T16:31:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.