FakeEdge: Alleviate Dataset Shift in Link Prediction
- URL: http://arxiv.org/abs/2211.15899v1
- Date: Tue, 29 Nov 2022 03:36:01 GMT
- Title: FakeEdge: Alleviate Dataset Shift in Link Prediction
- Authors: Kaiwen Dong, Yijun Tian, Zhichun Guo, Yang Yang, Nitesh V. Chawla
- Abstract summary: In a link prediction task, links in the training set are always present, while those in the testing set are not yet formed, resulting in a discrepancy in connectivity patterns and a bias in the learned representation.
We propose FakeEdge, a model-agnostic technique, to address the problem by mitigating the graph topological gap between training and testing sets.
- Score: 16.161812856581676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction is a crucial problem in graph-structured data. Due to the
recent success of graph neural networks (GNNs), a variety of GNN-based models
were proposed to tackle the link prediction task. Specifically, GNNs leverage
the message passing paradigm to obtain node representation, which relies on
link connectivity. However, in a link prediction task, links in the training
set are always present, while those in the testing set are not yet formed,
resulting in a discrepancy in connectivity patterns and a bias in the learned
representations. This leads to a dataset shift problem that degrades model
performance. In this paper, we first identify the dataset shift problem in the
performance. In this paper, we first identify the dataset shift problem in the
link prediction task and provide theoretical analyses on how existing link
prediction methods are vulnerable to it. We then propose FakeEdge, a
model-agnostic technique, to address the problem by mitigating the graph
topological gap between training and testing sets. Extensive experiments
demonstrate the applicability and superiority of FakeEdge on multiple datasets
across various domains.
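As a rough illustration of the idea (a minimal Python sketch, not the authors' released code; the subgraph-extraction helper and the `mode` argument are our own naming), the topological gap can be closed by forcing the focal edge into the same state, present or absent, in every extracted subgraph before any representation is computed:

```python
# Hypothetical sketch of the FakeEdge idea (not the authors' implementation):
# make the focal edge's presence consistent between training and testing
# subgraphs before a model computes node/subgraph representations.
import networkx as nx

def enclosing_subgraph(G, u, v, num_hops=1):
    """k-hop enclosing subgraph around the candidate pair (u, v)."""
    nodes = {u, v}
    frontier = {u, v}
    for _ in range(num_hops):
        frontier = {n for f in frontier for n in G.neighbors(f)} - nodes
        nodes |= frontier
    return G.subgraph(nodes).copy()

def fake_edge_subgraph(G, u, v, num_hops=1, mode="remove"):
    """Extract the enclosing subgraph and force the focal edge (u, v)
    into a fixed state, regardless of whether the link is observed."""
    sub = enclosing_subgraph(G, u, v, num_hops)
    if mode == "remove" and sub.has_edge(u, v):
        sub.remove_edge(u, v)          # every sample gets the testing-time view
    elif mode == "add" and not sub.has_edge(u, v):
        sub.add_edge(u, v)             # every sample gets the training-time view
    return sub

# Usage: an observed training link and an unobserved testing candidate now
# share the same connectivity pattern around the focal pair.
G = nx.karate_club_graph()
train_sub = fake_edge_subgraph(G, 0, 1, mode="remove")    # observed edge
test_sub = fake_edge_subgraph(G, 0, 9, mode="remove")     # unobserved pair
print(train_sub.has_edge(0, 1), test_sub.has_edge(0, 9))  # False False
```

With `mode="remove"`, both subgraphs look alike around the focal pair, which is the consistency the abstract refers to as mitigating the topological gap.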
Related papers
- PULL: PU-Learning-based Accurate Link Prediction [12.8532740199204]
Given an edge-incomplete graph, how can we accurately find the missing links?
We propose PULL (PU-Learning-based Link predictor), an accurate link prediction method based on positive-unlabeled (PU) learning.
PULL consistently outperforms the baselines for predicting links in edge-incomplete graphs.
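The PU-learning framing can be sketched as follows (a hypothetical illustration, not PULL's actual objective): observed edges are treated as labeled positives, sampled node pairs as unlabeled, and a non-negative PU risk estimator in the style of Kiryo et al. (2017) scores a link predictor; `class_prior` is an assumed estimate of how many unlabeled pairs are true links.

```python
# Hypothetical sketch of PU learning for link prediction (not PULL's code):
# observed edges are positives; randomly sampled pairs are unlabeled.
import torch
import torch.nn.functional as F

def nn_pu_link_loss(pos_scores, unl_scores, class_prior=0.1):
    """Non-negative PU risk estimator applied to link scores.
    `class_prior` is the assumed fraction of unlabeled node pairs
    that are actually missing links."""
    # Logistic surrogate losses for predicting "link" vs. "no link".
    loss_pos = F.softplus(-pos_scores).mean()        # positives labeled 1
    loss_pos_as_neg = F.softplus(pos_scores).mean()  # positives labeled 0
    loss_unl_as_neg = F.softplus(unl_scores).mean()  # unlabeled labeled 0
    neg_risk = loss_unl_as_neg - class_prior * loss_pos_as_neg
    # Clamp the (possibly negative) estimated negative risk at zero.
    return class_prior * loss_pos + torch.clamp(neg_risk, min=0.0)

# Usage with dummy scores from any link scorer.
pos = torch.randn(128)   # scores for observed edges
unl = torch.randn(512)   # scores for sampled (unlabeled) pairs
print(nn_pu_link_loss(pos, unl).item())
```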
arXiv Detail & Related papers (2024-05-20T09:47:22Z)
- Link Prediction without Graph Neural Networks [7.436429318051601]
Link prediction is a fundamental task in many graph applications.
Graph Neural Networks (GNNs) have become the predominant framework for link prediction.
We propose Gelato, a novel framework that applies a topological-centric framework to a graph enhanced by attribute information via graph learning.
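As a reminder of what a topology-centric score looks like without message passing (an illustrative classic heuristic only, not Gelato's actual scoring function), a measure such as Adamic-Adar ranks candidate pairs purely from neighborhood overlap:

```python
# Illustrative topological heuristic for link prediction (not Gelato's
# method): Adamic-Adar scores rank candidate pairs by weighted
# common-neighbor overlap, with no neural network involved.
import math
import networkx as nx

def adamic_adar(G, u, v):
    """Sum of 1/log(degree) over the common neighbors of u and v."""
    common = set(G.neighbors(u)) & set(G.neighbors(v))
    return sum(1.0 / math.log(G.degree(n)) for n in common if G.degree(n) > 1)

G = nx.karate_club_graph()
candidates = [(0, 9), (2, 13), (24, 27)]
ranked = sorted(candidates, key=lambda p: adamic_adar(G, *p), reverse=True)
print(ranked)  # candidate pairs ordered by heuristic score
```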
arXiv Detail & Related papers (2023-05-23T03:59:21Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines with significant margins.
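A toy version of a node correspondence-based distance (our simplification for illustration, not MatchExplainer's matching procedure) greedily pairs nodes across two graphs by feature distance and sums the matched distances:

```python
# Illustrative node-correspondence distance between two graphs (a greedy
# simplification, not MatchExplainer's actual algorithm): each target node
# is matched to its closest unused counterpart node by feature distance.
import numpy as np

def greedy_correspondence_distance(X_target, X_counterpart):
    """X_*: [num_nodes, feat_dim] node feature matrices;
    assumes X_counterpart has at least as many nodes as X_target."""
    used = set()
    matches, total = [], 0.0
    for i, x in enumerate(X_target):
        dists = np.linalg.norm(X_counterpart - x, axis=1)
        dists[list(used)] = np.inf          # each counterpart node used once
        j = int(np.argmin(dists))
        used.add(j)
        matches.append((i, j))
        total += float(dists[j])
    return matches, total

X_a = np.random.rand(5, 8)   # target graph node features
X_b = np.random.rand(6, 8)   # counterpart graph node features
print(greedy_correspondence_distance(X_a, X_b))
```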
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Text Representation Enrichment Utilizing Graph based Approaches: Stock Market Technical Analysis Case Study [0.0]
We propose a transductive hybrid approach composed of an unsupervised node representation learning model followed by a node classification/edge prediction model.
The proposed model is developed to classify stock market technical analysis reports, which to our knowledge is the first work in this domain.
arXiv Detail & Related papers (2022-11-29T11:26:08Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
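The generic form of edge-dropping augmentation can be sketched as below (a uniform random variant for illustration; the paper's contribution is making the drop probability topology adaptive, which this sketch does not reproduce):

```python
# Generic edge-dropping augmentation for GNN training (a simplified,
# non-adaptive sketch; the paper's method adapts the drop probability
# to the graph topology, which is not reproduced here).
import torch

def drop_edges(edge_index, drop_prob=0.2, training=True):
    """Randomly remove a fraction of edges from a [2, E] edge index."""
    if not training or drop_prob <= 0.0:
        return edge_index
    keep_mask = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep_mask]

# Usage inside a training loop: each epoch the GNN sees a sparser,
# randomly perturbed graph, which acts as a regularizer.
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0]])
print(drop_edges(edge_index, drop_prob=0.4))
```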
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
- An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose novel Robust Graph Convolutional Neural Networks for potentially erroneous single-view or multi-view data.
By incorporating extra layers via autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
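A hypothetical sketch of this general pattern (our illustration, not the paper's architecture): a small autoencoder is placed in front of a graph convolution so that noisy input features are reconstructed, and the reconstruction error can be added to the training loss.

```python
# Hypothetical sketch (not the paper's architecture): prepend a small
# denoising autoencoder to a graph convolution so that noisy input
# features are reconstructed before message passing.
import torch
import torch.nn as nn

class DenoisingGCNLayer(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)   # reconstruction head
        self.gcn_weight = nn.Linear(hid_dim, out_dim)

    def forward(self, X, A_hat):
        """X: [N, in_dim] node features; A_hat: [N, N] normalized adjacency."""
        Z = torch.relu(self.encoder(X))
        X_rec = self.decoder(Z)                     # used for the AE loss
        H = torch.relu(A_hat @ self.gcn_weight(Z))  # simple GCN propagation
        return H, X_rec

# The training loss would combine a task loss on H with a reconstruction
# term such as ((X_rec - X) ** 2).mean() to handle feature noise.
layer = DenoisingGCNLayer(16, 8, 4)
X = torch.randn(10, 16)
A_hat = torch.eye(10)                               # placeholder adjacency
H, X_rec = layer(X, A_hat)
print(H.shape, X_rec.shape)
```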
arXiv Detail & Related papers (2021-03-27T04:47:59Z)
- Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.