Heuristic Semi-Supervised Learning for Graph Generation Inspired by
Electoral College
- URL: http://arxiv.org/abs/2006.06469v2
- Date: Wed, 14 Oct 2020 15:51:30 GMT
- Title: Heuristic Semi-Supervised Learning for Graph Generation Inspired by
Electoral College
- Authors: Chen Li, Xutan Peng, Hao Peng, Jianxin Li, Lihong Wang, Philip S. Yu,
Lifang He
- Abstract summary: We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
- Score: 80.67842220664231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, graph-based algorithms have drawn much attention because of their
impressive success in semi-supervised setups. For better model performance,
previous studies learn to transform the topology of the input graph. However,
these works only focus on optimizing the original nodes and edges, leaving the
direction of augmenting existing data unexplored. In this paper, by simulating
the generation process of graph signals, we propose a novel heuristic
pre-processing technique, namely ELectoral COllege (ELCO), which automatically
expands new nodes and edges to refine the label similarity within a dense
subgraph. Substantially enlarging the original training set with high-quality
generated labeled data, our framework can effectively benefit downstream
models. To justify the generality and practicality of ELCO, we couple it with
the popular Graph Convolution Network and Graph Attention Network to perform
extensive evaluations on three standard datasets. In all setups tested, our
method boosts the average score of base models by a large margin of 4.7 points,
as well as consistently outperforms the state-of-the-art. We release our code
and data on https://github.com/RingBDStack/ELCO to guarantee reproducibility.
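The abstract describes ELCO only at a high level, but the expansion step can be illustrated with a heavily simplified toy sketch: for each label-consistent connected group of training nodes, add one virtual "elector" node wired to the group and give it the group's label, enlarging the labeled set. This is an interpretation of the abstract, not the authors' implementation; all names (`add_electors`, `connected_components`) are hypothetical.

```python
# Toy sketch of the idea in the abstract, NOT the authors' ELCO algorithm:
# expand new nodes and edges so that dense, same-labeled subgraphs each gain
# a labeled "elector" node, growing the training set.
from collections import defaultdict

def connected_components(nodes, edges):
    """Union-find over the given edge list; returns a list of node sets."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    groups = defaultdict(set)
    for n in nodes:
        groups[find(n)].add(n)
    return list(groups.values())

def add_electors(nodes, edges, labels, min_size=2):
    """Append one labeled elector node per same-label connected group.

    labels: dict mapping a subset of nodes to class labels (the training set).
    Returns (new_nodes, new_edges, new_labels) with electors appended.
    """
    new_nodes, new_edges, new_labels = list(nodes), list(edges), dict(labels)
    # Keep only edges whose endpoints carry the same label: these induce the
    # label-consistent subgraphs the abstract refers to.
    same = [(u, v) for u, v in edges
            if u in labels and v in labels and labels[u] == labels[v]]
    labeled = [n for n in nodes if n in labels]
    for comp in connected_components(labeled, same):
        if len(comp) < min_size:
            continue
        elector = f"elector_{min(comp)}"          # fresh virtual node id
        new_nodes.append(elector)
        new_edges.extend((elector, m) for m in comp)
        new_labels[elector] = labels[next(iter(comp))]
    return new_nodes, new_edges, new_labels
```

A downstream GCN or GAT would then be trained on the expanded graph and the enlarged label dictionary instead of the originals.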
Related papers
- Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC)
arXiv Detail & Related papers (2024-06-27T08:26:20Z)
- GC4NC: A Benchmark Framework for Graph Condensation on Node Classification with New Insights [30.796414860754837]
Graph condensation (GC) is an emerging technique designed to learn a significantly smaller graph that retains the essential information of the original graph.
This paper introduces GC4NC, a comprehensive framework for evaluating diverse GC methods on node classification.
Our systematic evaluation offers novel insights into how condensed graphs behave and the critical design choices that drive their success.
arXiv Detail & Related papers (2024-06-24T15:17:49Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
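The two-stage recipe above can be mimicked with a toy stand-in. In the sketch below, a frozen random linear "encoder" plays the role of the pre-trained LM, only a small head is trained (standing in for parameter-efficient fine-tuning), and the encoder's last hidden states become the node embeddings. Everything here is illustrative, not the SimTeG authors' code.

```python
# Toy two-stage sketch of the SimTeG recipe described above (illustrative only):
# (1) freeze the "encoder" and fine-tune a small head; (2) reuse the encoder's
# last hidden states as node embeddings for a downstream graph model.
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained encoder": raw node text features -> hidden states.
W_enc = rng.normal(size=(8, 4))           # stays fixed (parameter-efficient)
def encode(x):                            # "last hidden states" of the toy LM
    return np.tanh(x @ W_enc)

# Stage 1: fine-tune only a small classification head on labeled nodes.
def finetune_head(X, y, steps=200, lr=0.5):
    H = encode(X)
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w)))     # sigmoid
        w -= lr * H.T @ (p - y) / len(y)       # logistic-loss gradient step
    return w

# Stage 2: the downstream GNN consumes the hidden states as node embeddings.
X = rng.normal(size=(10, 8))              # raw features for 10 nodes
y = (X[:, 0] > 0).astype(float)           # toy binary labels
w = finetune_head(X, y)
embeddings = encode(X)                    # hand these to a GNN
preds = (embeddings @ w > 0).astype(float)
```

In the real method the encoder is a pre-trained language model fine-tuned with PEFT on the node texts, and the embeddings feed a GNN rather than a linear head.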
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
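The perturbation loop described above can be sketched with plain logistic regression standing in for a GNN: each training step first takes a few sign-gradient ascent steps on the input features, then a weight-descent step on the perturbed batch. This is a hedged illustration of the general idea, not the FLAG authors' implementation; all names are hypothetical.

```python
# Minimal numpy sketch of FLAG-style augmentation: perturb node features with
# gradient *ascent* on the loss before each weight-descent step. A logistic
# model replaces the GNN for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def flag_train(X, y, ascent_steps=3, alpha=0.01, lr=0.1, epochs=50):
    """Train a logistic model while adversarially perturbing input features."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        delta = np.zeros_like(X)                  # per-node feature perturbation
        for _ in range(ascent_steps):
            p = sigmoid((X + delta) @ w)
            grad_delta = np.outer(p - y, w)       # d(loss)/d(delta), row-wise
            delta += alpha * np.sign(grad_delta)  # ascent: make the loss worse
        p = sigmoid((X + delta) @ w)
        w -= lr * (X + delta).T @ (p - y) / n     # descent step for the model
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] > 0).astype(float)                   # toy separable labels
w = flag_train(X, y)
acc = float(np.mean((sigmoid(X @ w) > 0.5) == (y == 1)))
```

The "free" aspect of the real method comes from reusing the same backward pass for both the perturbation and the weight update; this toy version simply recomputes the gradients.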
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Pseudoinverse Graph Convolutional Networks: Fast Filters Tailored for Large Eigengaps of Dense Graphs and Hypergraphs [0.0]
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets.
We propose a new GCN variant whose three-part filter space is targeted at dense graphs.
arXiv Detail & Related papers (2020-08-03T08:48:41Z)
- Adaptive Graph Encoder for Attributed Graph Embedding [36.06427854846497]
Attributed graph embedding learns vector representations from graph topology and node features.
We propose Adaptive Graph Encoder (AGE), a novel attributed graph embedding framework.
We conduct experiments using four public benchmark datasets to validate AGE on node clustering and link prediction tasks.
arXiv Detail & Related papers (2020-07-03T10:20:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.