Robust Causal Graph Representation Learning against Confounding Effects
- URL: http://arxiv.org/abs/2208.08584v1
- Date: Thu, 18 Aug 2022 01:31:25 GMT
- Title: Robust Causal Graph Representation Learning against Confounding Effects
- Authors: Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Bing Xu, Changwen
Zheng, Fuchun Sun
- Abstract summary: We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
- Score: 21.380907101361643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prevailing graph neural network models have achieved significant progress
in graph representation learning. However, in this paper, we uncover an
ever-overlooked phenomenon: the pre-trained graph representation learning model
tested with full graphs underperforms the model tested with well-pruned graphs.
This observation reveals that there exist confounders in graphs, which may
interfere with the model learning semantic information, and current graph
representation learning methods have not eliminated their influence. To tackle
this issue, we propose Robust Causal Graph Representation Learning (RCGRL) to
learn robust graph representations against confounding effects. RCGRL
introduces an active approach to generate instrumental variables under
unconditional moment restrictions, which empowers the graph representation
learning model to eliminate confounders, thereby capturing discriminative
information that is causally related to downstream predictions. We offer
theorems and proofs to guarantee the theoretical effectiveness of the proposed
approach. Empirically, we conduct extensive experiments on a synthetic dataset
and multiple benchmark datasets. The results demonstrate that compared with
state-of-the-art methods, RCGRL achieves better prediction performance and
generalization ability.
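The abstract's core idea, using instrumental variables to remove confounding, can be illustrated outside the graph setting. The sketch below is not the RCGRL algorithm; it is a hypothetical toy example of the classical instrumental-variable (two-stage least squares) estimator that the method generalizes, showing how a hidden confounder biases a naive regression while an instrument recovers the causal effect.

```python
import numpy as np

# Toy illustration (not RCGRL itself): a hidden confounder u affects both the
# feature x and the label y, biasing ordinary least squares; an instrument z
# that influences x but not y directly recovers the causal effect via 2SLS.
rng = np.random.default_rng(0)
n = 100_000
beta_true = 2.0                      # true causal effect of x on y

z = rng.normal(size=n)               # instrument: affects x only
u = rng.normal(size=n)               # unobserved confounder
x = z + u + 0.1 * rng.normal(size=n)
y = beta_true * x + 3.0 * u + 0.1 * rng.normal(size=n)

# Naive OLS is biased because x and the noise share the confounder u.
beta_ols = (x @ y) / (x @ x)

# Two-stage least squares: regress x on z, then y on the fitted x_hat.
x_hat = z * (z @ x) / (z @ z)
beta_iv = (x_hat @ y) / (x_hat @ x_hat)

print(f"OLS estimate: {beta_ols:.2f} (biased by the confounder)")
print(f"IV  estimate: {beta_iv:.2f} (close to the true effect)")
```

RCGRL's contribution, per the abstract, is to *generate* such instruments actively under unconditional moment restrictions rather than assuming one is observed.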
Related papers
---
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
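The influence-function idea behind GIF can be sketched on a tiny ridge regression. This is a hypothetical illustration of the general technique, not GIF's graph-specific estimator: the parameter change from deleting a sample is approximated by a single Newton-style correction $H^{-1}\nabla\ell$ instead of retraining from scratch.

```python
import numpy as np

# Hypothetical sketch of influence-function-based unlearning on ridge
# regression (illustrative of the idea GIF builds on, not GIF itself).
rng = np.random.default_rng(1)
n, d, lam = 200, 3, 1.0
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

def fit(X, y):
    """Closed-form ridge solution: (X^T X + lam I) w = X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_full = fit(X, y)

# Delete sample 0: influence-function estimate of the parameter shift.
H = X.T @ X + lam * np.eye(d)            # Hessian of the ridge objective
g = X[0] * (X[0] @ w_full - y[0])        # gradient of the deleted sample's loss
w_est = w_full + np.linalg.solve(H, g)   # one Newton-style correction

w_retrain = fit(X[1:], y[1:])            # ground truth: retrain without it
print(np.max(np.abs(w_est - w_retrain))) # approximation error is tiny
```

For a single deleted point among many, the first-order estimate matches full retraining to high precision, which is why such methods are attractive for unlearning.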
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Mitigating the Performance Sacrifice in DP-Satisfied Federated Settings
through Graph Contrastive Learning [43.73753083910439]
We investigate how differential privacy (DP) can be implemented on graph edges and observe a performance decrease.
Inspired by this, we propose leveraging graph contrastive learning to alleviate the performance drop resulting from DP.
Extensive experiments conducted with four representative graph models on five widely used benchmark datasets show that contrastive learning indeed alleviates the models' DP-induced performance drops.
arXiv Detail & Related papers (2022-07-24T22:48:51Z) - Latent Augmentation For Better Graph Self-Supervised Learning [20.082614919182692]
We argue that predictive models equipped with latent augmentations and a powerful decoder can achieve comparable or even better representation power than contrastive models.
A novel graph decoder named Wiener Graph Deconvolutional Network is correspondingly designed to perform information reconstruction from augmented latent representations.
arXiv Detail & Related papers (2022-06-26T17:41:59Z) - Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z) - Self-Supervised Representation Learning via Latent Graph Prediction [41.64774038444827]
Self-supervised learning (SSL) of graph neural networks is emerging as a promising way of leveraging unlabeled data.
We propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction.
Our experimental results demonstrate the superiority of LaGraph in performance and its robustness to decreasing training sample size on both graph-level and node-level tasks.
arXiv Detail & Related papers (2022-02-16T21:10:33Z) - Learning Robust Representation through Graph Adversarial Contrastive
Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
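The "node as a distribution" idea summarized above can be sketched with the standard reparameterization trick. This is a hypothetical minimal illustration, not the paper's actual model: each node carries a mean and a log-variance, and downstream computations see stochastic samples rather than one fixed vector.

```python
import numpy as np

# Hypothetical sketch: each node is represented by a Gaussian (mu, sigma)
# in latent space instead of a single deterministic embedding vector.
rng = np.random.default_rng(0)
num_nodes, dim = 4, 8

mu = rng.normal(size=(num_nodes, dim))       # per-node mean embedding
log_var = rng.normal(size=(num_nodes, dim))  # per-node log-variance

def sample_embedding(mu, log_var, rng):
    """Draw one stochastic embedding per node: z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

z1 = sample_embedding(mu, log_var, rng)
z2 = sample_embedding(mu, log_var, rng)
# Two draws differ, but both are centered on the same per-node means.
```

Keeping a variance per node lets the encoder express uncertainty induced by random augmentations, which a deterministic vector cannot.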
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
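The "adaptively incorporate information from large receptive fields" idea can be sketched as follows. This is a hypothetical simplification, not the paper's DAGNN architecture: features are propagated for several hops, and the hop-wise representations are combined with per-node weights (here random stand-ins for learned gates).

```python
import numpy as np

# Hypothetical sketch: decouple transformation from propagation, compute
# representations at several receptive-field depths, and mix them with
# per-node weights instead of stacking ever-deeper layers.
rng = np.random.default_rng(0)
n, d, K = 5, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                        # symmetrize
np.fill_diagonal(A, 1.0)                      # add self-loops
A_hat = A / A.sum(axis=1, keepdims=True)      # row-normalized propagation

H = rng.normal(size=(n, d))                   # transformed features (post-MLP)
hops = [H]
for _ in range(K):
    hops.append(A_hat @ hops[-1])             # k-hop smoothed representations

scores = rng.random((n, K + 1))               # stand-in for learned gates
scores /= scores.sum(axis=1, keepdims=True)   # weights sum to 1 per node
out = sum(s[:, None] * h for s, h in zip(scores.T, hops))
```

Because each node weights its own hop depths, nodes can draw on large receptive fields without every node being forced into the same degree of smoothing.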
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Quantifying Challenges in the Application of Graph Representation
Learning [0.0]
We provide an application oriented perspective to a set of popular embedding approaches.
We evaluate their representational power with respect to real-world graph properties.
Our results suggest that "one-to-fit-all" GRL approaches are hard to define in real-world scenarios.
arXiv Detail & Related papers (2020-06-18T03:19:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.