Causality-based CTR Prediction using Graph Neural Networks
- URL: http://arxiv.org/abs/2301.12762v1
- Date: Mon, 30 Jan 2023 10:16:40 GMT
- Title: Causality-based CTR Prediction using Graph Neural Networks
- Authors: Panyu Zhai, Yanwu Yang and Chunjie Zhang
- Abstract summary: This paper develops a causality-based CTR prediction model in the graph neural networks framework (Causal-GNN).
It integrates representations of feature graph, user graph and ad graph in the context of online advertising.
Experiments conducted on three public datasets demonstrate the superiority of Causal-GNN in AUC and Logloss.
- Score: 14.93804796744474
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As a prevalent problem in online advertising, CTR prediction has attracted
plentiful attention from both academia and industry. Recent studies have
established CTR prediction models in the graph neural networks (GNNs)
framework. However, most GNN-based models handle feature interactions on a
complete graph while ignoring causal relationships among features, which
results in a substantial performance drop on out-of-distribution data. This
paper is dedicated to developing a causality-based CTR prediction model in the
GNNs framework (Causal-GNN) integrating representations of feature graph, user
graph and ad graph in the context of online advertising. In our model, a
structured representation learning method (GraphFwFM) is designed to capture
high-order representations on feature graph based on causal discovery among
field features in gated graph neural networks (GGNNs), and GraphSAGE is
employed to obtain graph representations of users and ads. Experiments
conducted on three public datasets demonstrate the superiority of Causal-GNN in
AUC and Logloss and the effectiveness of GraphFwFM in capturing high-order
representations on causal feature graph.
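The pipeline the abstract describes (gated message passing over a causal feature graph, fused with user and ad graph embeddings into a click probability) can be sketched at toy scale. The following is a minimal NumPy illustration of the general idea only, not the paper's GraphFwFM or GraphSAGE implementation: `ggnn_step`, `causal_gnn_score`, the hand-written causal DAG, and the random stand-in user/ad embeddings are all hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, adj, W, U):
    """One gated message-passing step over the feature graph.
    `adj` is a causal adjacency matrix: adj[i, j] = 1 only where causal
    discovery found an edge j -> i, so messages flow along causal edges
    rather than a complete graph (the failure mode the paper criticizes)."""
    m = adj @ h @ W                        # aggregate messages from causal parents
    z = sigmoid(m + h @ U)                 # update gate (simplified GRU-style gating)
    return (1.0 - z) * h + z * np.tanh(m)

def causal_gnn_score(feat_h, causal_adj, user_emb, ad_emb, params, steps=2):
    """Fuse feature-graph, user, and ad representations into a CTR score."""
    W, U, w_out = params
    h = feat_h
    for _ in range(steps):                 # stands in for the gated GNN layers
        h = ggnn_step(h, causal_adj, W, U)
    g = h.mean(axis=0)                     # readout over the feature graph
    x = np.concatenate([g, user_emb, ad_emb])
    return sigmoid(w_out @ x)              # predicted CTR in (0, 1)

# Toy usage: 3 feature fields, 4-dim embeddings, a hand-written causal DAG.
rng = np.random.default_rng(0)
d = 4
causal_adj = np.array([[0., 1., 1.],
                       [0., 0., 1.],
                       [0., 0., 0.]])
feat_h = rng.normal(size=(3, d))
params = (rng.normal(size=(d, d)), rng.normal(size=(d, d)),
          rng.normal(size=3 * d))
p = causal_gnn_score(feat_h, causal_adj,
                     user_emb=rng.normal(size=d),  # stand-in for a user-graph embedding
                     ad_emb=rng.normal(size=d),    # stand-in for an ad-graph embedding
                     params=params)
```

In a real system the user and ad vectors would come from trained graph encoders (the paper uses GraphSAGE) and the causal adjacency from a causal discovery procedure over field features; here both are random placeholders so the sketch stays self-contained.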
Related papers
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Self-supervision meets kernel graph neural models: From architecture to augmentations [36.388069423383286]
We improve the design and learning of kernel graph neural networks (KGNNs).
We develop a novel structure-preserving graph data augmentation method called latent graph augmentation (LGA).
Our proposed model achieves competitive performance comparable to or sometimes outperforming state-of-the-art graph representation learning frameworks.
arXiv Detail & Related papers (2023-10-17T14:04:22Z)
- The Expressive Power of Graph Neural Networks: A Survey [9.08607528905173]
We conduct a first survey of models that enhance expressive power under different forms of definition.
The models are reviewed under three categories: graph feature enhancement, graph topology enhancement, and GNN architecture enhancement.
arXiv Detail & Related papers (2023-08-16T09:12:21Z)
- INFLECT-DGNN: Influencer Prediction with Dynamic Graph Neural Networks [2.8497910326197586]
We introduce INFLECT-DGNN, a new framework for INFLuencer prEdiCTion with Dynamic Graph Neural Networks (GNNs) and Recurrent Neural Networks (RNNs).
Our results show that using RNNs to encode temporal attributes alongside GNNs significantly improves predictive performance.
arXiv Detail & Related papers (2023-07-16T19:04:48Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Graph Generative Model for Benchmarking Graph Neural Networks [73.11514658000547]
We introduce a novel graph generative model that learns and reproduces the distribution of real-world graphs in a privacy-controlled way.
Our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
arXiv Detail & Related papers (2022-07-10T06:42:02Z)
- Distribution Preserving Graph Representation Learning [11.340722297341788]
Graph neural networks (GNNs) are effective for modeling graphs to obtain distributed representations of nodes and of entire graphs.
We propose Distribution Preserving GNN (DP-GNN) - a GNN framework that can improve the generalizability of expressive GNN models.
We evaluate the proposed DP-GNN framework on multiple benchmark datasets for graph classification tasks.
arXiv Detail & Related papers (2022-02-27T19:16:26Z)
- Stability and Generalization Capabilities of Message Passing Graph Neural Networks [4.691259009382681]
We study the generalization capabilities of MPNNs in graph classification.
We derive a non-asymptotic bound on the generalization gap between the empirical and statistical loss.
This is proven by showing that an MPNN applied to a graph approximates the MPNN applied to the geometric model that the graph discretizes.
arXiv Detail & Related papers (2022-02-01T18:37:53Z)
- OOD-GNN: Out-of-Distribution Generalized Graph Neural Network [73.67049248445277]
Graph neural networks (GNNs) have achieved impressive performance when testing and training graph data come from the same distribution.
Existing GNNs lack out-of-distribution generalization abilities, so their performance degrades substantially when distribution shifts exist between testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs.
arXiv Detail & Related papers (2021-12-07T16:29:10Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are typically proposed without considering distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even though these correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.