Explicit Semantic Cross Feature Learning via Pre-trained Graph Neural
Networks for CTR Prediction
- URL: http://arxiv.org/abs/2105.07752v1
- Date: Mon, 17 May 2021 11:56:04 GMT
- Title: Explicit Semantic Cross Feature Learning via Pre-trained Graph Neural
Networks for CTR Prediction
- Authors: Feng Li, Bencheng Yan, Qingqing Long, Pengjie Wang, Wei Lin, Jian Xu
and Bo Zheng
- Abstract summary: Cross features play an important role in click-through rate (CTR) prediction.
Most of the existing methods adopt a DNN-based model to capture the cross features in an implicit manner.
We propose Pre-trained Cross Feature learning Graph Neural Networks (PCF-GNN), a GNN-based pre-trained model aiming at generating cross features in an explicit fashion.
- Score: 14.270296688394762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross features play an important role in click-through rate (CTR) prediction.
Most of the existing methods adopt a DNN-based model to capture the cross
features in an implicit manner. These implicit methods can lead to sub-optimal
performance because they lack explicit semantic modeling. Although traditional
statistical explicit semantic cross features can address this limitation, they
still suffer from challenges of their own, including poor generalization and
high memory cost. Few works focus on
tackling these challenges. In this paper, we take the first step in learning
the explicit semantic cross features and propose Pre-trained Cross Feature
learning Graph Neural Networks (PCF-GNN), a GNN-based pre-trained model aiming
at generating cross features in an explicit fashion. Extensive experiments are
conducted on both public and industrial datasets, where PCF-GNN shows
competence in both performance and memory-efficiency in various tasks.
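The abstract does not spell out the architecture, so the following is only a minimal sketch of the idea, assuming a feature co-occurrence graph with one node per feature value, a single message-passing layer, and an edge-level head whose output serves as the explicit cross feature; the class name, dimensions, and pre-training target are all assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CrossFeatureGNN(nn.Module):
    """Hypothetical sketch: one message-passing layer over a feature
    co-occurrence graph, plus an edge head that scores a feature pair.
    The score can be cached or recomputed as an explicit cross feature."""

    def __init__(self, num_feature_values: int, dim: int = 16):
        super().__init__()
        self.emb = nn.Embedding(num_feature_values, dim)  # one node per feature value
        self.msg = nn.Linear(dim, dim)
        self.edge_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def node_states(self, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) row-normalized co-occurrence adjacency (an assumption)
        h = self.emb.weight
        return torch.relu(self.msg(adj @ h) + h)  # aggregate neighbors, keep self

    def cross_score(self, adj: torch.Tensor, i: int, j: int) -> torch.Tensor:
        h = self.node_states(adj)
        return self.edge_head(torch.cat([h[i], h[j]], dim=-1))
```

Pre-training would fit cross_score against an observed pairwise statistic (e.g., a pair's historical co-occurrence rate), so that at serving time the model can generalize to feature pairs never seen together, which is where a purely statistical cross-feature table fails.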
Related papers
- Post-Hoc Robustness Enhancement in Graph Neural Networks with Conditional Random Fields [19.701706244728037]
Graph Neural Networks (GNNs) have been shown to be vulnerable to adversarial attacks.
This study introduces RobustCRF, a post-hoc approach aiming to enhance the robustness of GNNs at the inference stage.
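The summary only states that the smoothing happens post hoc at inference; a generic mean-field CRF update over a GNN's output probabilities, shown below, is one common way to realize that, and everything here (function name, update rule, hyperparameters) is an assumption rather than the paper's method.

```python
import numpy as np

def crf_smooth(probs: np.ndarray, adj: np.ndarray,
               steps: int = 10, alpha: float = 0.5) -> np.ndarray:
    """Generic mean-field CRF smoothing of per-node class probabilities.
    probs: (N, C) softmax outputs of a trained GNN.
    adj:   (N, N) row-normalized adjacency matrix."""
    log_unary = np.log(probs + 1e-12)
    q = probs.copy()
    for _ in range(steps):
        neighbor = adj @ q                              # expected neighbor labels
        q = np.exp(log_unary + alpha * np.log(neighbor + 1e-12))
        q /= q.sum(axis=1, keepdims=True)               # renormalize per node
    return q
```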
arXiv Detail & Related papers (2024-11-08T08:26:42Z)
- Kolmogorov-Arnold Graph Neural Networks [2.4005219869876453]
Graph neural networks (GNNs) excel in learning from network-like data but often lack interpretability.
We propose the Graph Kolmogorov-Arnold Network (GKAN) to enhance both accuracy and interpretability.
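Kolmogorov-Arnold layers replace fixed scalar weights with learnable univariate functions; the sketch below uses Gaussian radial basis functions as a stand-in for the splines usually employed, which is an assumption for illustration only.

```python
import torch
import torch.nn as nn

class KANEdge(nn.Module):
    """Learnable univariate function phi(x) = sum_k c_k * exp(-(x - t_k)^2 / 2),
    an RBF stand-in for a spline edge function in a Kolmogorov-Arnold layer."""

    def __init__(self, num_basis: int = 8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, num_basis))
        self.coef = nn.Parameter(torch.zeros(num_basis))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        basis = torch.exp(-0.5 * (x.unsqueeze(-1) - self.centers) ** 2)
        return basis @ self.coef  # shape of x is preserved
```

In a GKAN-style layer, one such function per input channel would be applied to the aggregated neighbor features, and the learned curves can be inspected directly, which is the source of the interpretability claim.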
arXiv Detail & Related papers (2024-06-26T13:54:59Z)
- Probabilistically Rewired Message-Passing Neural Networks [41.554499944141654]
Message-passing graph neural networks (MPNNs) emerged as powerful tools for processing graph-structured input.
MPNNs operate on a fixed input graph structure, ignoring potential noise and missing information.
We devise probabilistically rewired MPNNs (PR-MPNNs) which learn to add relevant edges while omitting less beneficial ones.
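A standard way to sample a discrete edge set from learned scores is Gumbel perturbation followed by top-k; the sketch below shows that sampling step only and is not the paper's exact estimator.

```python
import torch

def sample_rewired_edges(edge_scores: torch.Tensor, k: int) -> torch.Tensor:
    """Sample k candidate edges via Gumbel top-k: adding Gumbel noise to the
    logits and taking the top k draws edges with probability increasing in
    their scores. edge_scores: (E,) logits over candidate edges."""
    u = torch.rand_like(edge_scores).clamp_min(1e-9)
    gumbel = -torch.log(-torch.log(u))
    return torch.topk(edge_scores + gumbel, k).indices  # indices of kept edges
```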
arXiv Detail & Related papers (2023-10-03T15:43:59Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
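The summary does not give the approximation itself; purely as an illustration of inverting a propagation operator cheaply, the sketch below linearizes a K-hop GNN as y -> A^K y and approximates the inverse map with a truncated Neumann series, which converges only when the spectral radius of I - A^K is below one. This is a loose analogy, not the paper's algorithm.

```python
import numpy as np

def deconvolve_labels(y: np.ndarray, adj_norm: np.ndarray,
                      hops: int = 2, terms: int = 4) -> np.ndarray:
    """Illustrative approximate inverse of a linearized K-hop propagation:
    A^{-K} y ~= sum_{t < terms} (I - A^K)^t y (truncated Neumann series).
    y: (N, C) label matrix; adj_norm: (N, N) row-normalized adjacency."""
    n = adj_norm.shape[0]
    prop = np.linalg.matrix_power(adj_norm, hops)  # linearized propagation A^K
    resid = np.eye(n) - prop
    out, term = y.copy(), y.copy()
    for _ in range(terms - 1):
        term = resid @ term
        out = out + term
    return out
```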
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
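The paper's functional-space estimator is beyond this summary; the sketch below shows plain denoising score matching on a batch of function values, the generic objective such a score network could be trained with, and score_net and all hyperparameters are assumptions.

```python
import torch

def denoising_score_matching_loss(score_net, f_vals: torch.Tensor,
                                  sigma: float = 0.1) -> torch.Tensor:
    """Generic denoising score matching: train score_net so that
    score_net(f_noisy) ~= -(f_noisy - f_vals) / sigma^2, the score of the
    Gaussian-smoothed distribution of function values f_vals (B, d)."""
    noise = torch.randn_like(f_vals) * sigma
    f_noisy = f_vals + noise
    target = -noise / sigma ** 2
    return ((score_net(f_noisy) - target) ** 2).sum(dim=-1).mean()
```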
arXiv Detail & Related papers (2022-10-24T15:14:26Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
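The usual building block of an invertible network is an affine coupling layer, which is exactly invertible by construction; the sketch below is the standard RealNVP-style layer (assuming an even feature dimension), not the paper's specific architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: transform half the features conditioned on the
    other half, so both forward and inverse are exact and cheap."""

    def __init__(self, dim: int):  # dim must be even
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=-1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)
```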
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
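Teacher-student self-ensembling is typically implemented with an exponential moving average of the student's weights; the update below is that standard mean-teacher step, and whether SE-GAN uses exactly this schedule is an assumption.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999) -> None:
    """Move each teacher weight toward the student's: t = decay*t + (1-decay)*s.
    The teacher thus ensembles the student over training time."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1 - decay)
```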
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
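For intuition only: in the standard two-layer mean-field picture, a network is an expectation over a measure rho on neuron parameters, f(x) = E_{(a,w)~rho}[a * relu(w.x)], which the Monte Carlo sketch below approximates; rho_sampler is hypothetical, and the paper's contribution is extending such measure-based representations to deep architectures.

```python
import numpy as np

def mean_field_net(x: np.ndarray, rho_sampler, num_neurons: int = 10_000) -> float:
    """Monte Carlo estimate of f(x) = E_{(a, w) ~ rho}[a * relu(w @ x)].
    rho_sampler(m) returns a: (m,) output weights and w: (m, d) input weights."""
    a, w = rho_sampler(num_neurons)
    return float((a * np.maximum(w @ x, 0.0)).mean())
```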
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
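Elastic weight consolidation, the weight-importance method the study analyzes, penalizes movement of parameters that were important for earlier tasks; the sketch below is the standard EWC penalty with a diagonal Fisher estimate, with the dictionary layout and lam chosen for illustration.

```python
import torch

def ewc_penalty(model: torch.nn.Module, fisher: dict, old_params: dict,
                lam: float = 1.0) -> torch.Tensor:
    """Standard EWC regularizer: lam/2 * sum_i F_i * (theta_i - theta_i*)^2,
    where F is a diagonal Fisher estimate from the previous task."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss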
arXiv Detail & Related papers (2020-06-22T10:05:12Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
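Zeroth-order methods estimate gradients from loss evaluations alone, which is useful when the objective is not differentiable; the two-point Gaussian estimator below is the textbook version, shown only in the spirit of the framework's zeroth-order component.

```python
import torch

def zeroth_order_grad(loss_fn, theta: torch.Tensor, mu: float = 1e-3,
                      samples: int = 20) -> torch.Tensor:
    """Two-point estimator g ~= mean_u[(f(theta + mu*u) - f(theta)) / mu * u]
    with Gaussian directions u; no backpropagation through loss_fn needed."""
    base = loss_fn(theta)
    grad = torch.zeros_like(theta)
    for _ in range(samples):
        u = torch.randn_like(theta)
        grad += (loss_fn(theta + mu * u) - base) / mu * u
    return grad / samples
```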
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.