AutoGEL: An Automated Graph Neural Network with Explicit Link Information
- URL: http://arxiv.org/abs/2112.01064v1
- Date: Thu, 2 Dec 2021 09:09:18 GMT
- Title: AutoGEL: An Automated Graph Neural Network with Explicit Link Information
- Authors: Zhili Wang, Shimin Di, Lei Chen
- Abstract summary: We present AutoGEL, a novel AutoGNN framework that explicitly models link information.
This enables AutoGEL to handle the link prediction task and improves the performance of AutoGNNs on the node classification and graph classification tasks.
- Score: 7.525545233605658
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, Graph Neural Networks (GNNs) have gained popularity in a variety of real-world scenarios. Despite this great success, the architecture design of GNNs heavily relies on manual labor. Thus, automated graph neural network (AutoGNN) research has attracted interest and attention from the community and has achieved significant performance improvements in recent years. However, existing AutoGNN works mainly model and leverage the link information in graphs implicitly, which is not well suited to the link prediction task and limits the performance of AutoGNN on other graph tasks. In this paper, we present AutoGEL, a novel AutoGNN framework that explicitly models link information. In this way, AutoGEL can handle the link prediction task and improve the performance of AutoGNNs on the node classification and graph classification tasks. Specifically, AutoGEL proposes a novel search space containing various design dimensions at both the intra-layer and inter-layer levels, and adopts a more robust differentiable search algorithm to further improve efficiency and effectiveness. Experimental results on benchmark datasets demonstrate the superiority of AutoGEL on several tasks.
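The core mechanism described above is a differentiable search over discrete design choices. As a rough illustration, the following is a minimal sketch of one searchable design dimension using a Gumbel-softmax relaxation, so that selecting among candidate operations stays differentiable; the class name, candidate set, and temperature handling are illustrative assumptions, not AutoGEL's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableChoice(nn.Module):
    """One searchable design dimension: a relaxed, differentiable
    selection over candidate operations (hypothetical sketch)."""
    def __init__(self, candidates):
        super().__init__()
        self.candidates = nn.ModuleList(candidates)
        # One architecture logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(candidates)))

    def forward(self, x, tau=1.0):
        # Gumbel-softmax yields near-one-hot mixing weights while
        # keeping gradients flowing to the architecture logits.
        w = F.gumbel_softmax(self.alpha, tau=tau)
        return sum(wi * op(x) for wi, op in zip(w, self.candidates))

# Toy usage: search among three candidate per-node transforms.
choice = DifferentiableChoice([
    nn.Linear(16, 16),
    nn.Identity(),
    nn.Sequential(nn.Linear(16, 16), nn.ReLU()),
])
out = choice(torch.randn(4, 16))  # weighted mix of candidate outputs
```

After the search converges, the highest-scoring candidate in each dimension would be retained to form the final architecture.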
Related papers
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, which aims to assess the performance of a specific GNN model, trained on labeled and observed graphs, on unseen graphs without labels.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
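As a rough sketch of what the second stage could look like, the snippet below regresses an estimated accuracy from distributional statistics of a GNN's node-level predictions; the class name and features are hypothetical, and the paper's actual DiscGraph-based supervision is more involved.

```python
import torch
import torch.nn as nn

class AccuracyEstimator(nn.Module):
    """Hypothetical sketch: regress an accuracy estimate from
    statistics of a GNN's node-level output distribution."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
        )

    def forward(self, logits):
        probs = torch.softmax(logits, dim=-1)
        conf = probs.max(dim=-1).values
        ent = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
        # Mean/std of confidence and mean entropy as crude features.
        feats = torch.stack([conf.mean(), conf.std(), ent.mean()])
        return self.mlp(feats)  # estimated accuracy in [0, 1]

# Would be trained with, e.g., MSE against accuracies observed on
# the constructed evaluation set.
est = AccuracyEstimator()
pred_acc = est(torch.randn(100, 7))  # logits for 100 nodes, 7 classes
```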
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.552170474958736]
We propose to capture long-distance dependencies in graphs with shallower models instead of deeper ones, which leads to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further acceleration through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
arXiv Detail & Related papers (2023-02-03T02:33:07Z)
- Graph Property Prediction on Open Graph Benchmark: A Winning Solution by Graph Neural Architecture Search [37.89305885538052]
We design a graph neural network framework for the graph classification task by introducing PAS (Pooling Architecture Search).
We extend it with the GNN topology design method F2GNN to further improve performance on the graph property prediction task.
The results demonstrate that the NAS method generalizes well across multiple tasks and that our method has an advantage on graph property prediction tasks.
arXiv Detail & Related papers (2022-07-13T08:17:48Z)
- Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
The searched operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z)
- Attention-Based Recommendation On Graphs [9.558392439655012]
Graph Neural Networks (GNNs) have shown remarkable performance on different tasks.
In this study, we propose GARec as a model-based recommender system.
The presented method outperforms existing model-based methods, non-graph neural networks, and graph neural networks on different MovieLens datasets.
arXiv Detail & Related papers (2022-01-04T21:02:02Z)
- AutoHEnsGNN: Winning Solution to AutoGraph Challenge for KDD Cup 2020 [29.511523832243046]
We present AutoHEnsGNN, a framework to build effective and robust models for graph tasks without any human intervention.
AutoHEnsGNN won first place in the AutoGraph Challenge for KDD Cup 2020.
arXiv Detail & Related papers (2021-11-25T07:23:44Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Rethinking Graph Neural Network Search from Message-passing [120.62373472087651]
This paper proposes Graph Neural Architecture Search (GNAS) with a novel search space.
We design a Graph Neural Architecture Paradigm (GAP) with a tree-topology computation procedure and two types of fine-grained atomic operations.
Experiments show that our GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
arXiv Detail & Related papers (2021-03-26T06:10:41Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network.
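A minimal sketch of the joint-masking idea, under assumptions: trainable masks are placed over both the dense adjacency matrix and the layer weights, kept differentiable during training and later thresholded to obtain the sparse ticket. Names and details are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MaskedGraphConv(nn.Module):
    """Hypothetical sketch of UGS-style joint sparsification:
    differentiable masks over both adjacency and weights."""
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)
        self.adj_mask = nn.Parameter(torch.ones(n_nodes, n_nodes))
        self.w_mask = nn.Parameter(torch.ones(in_dim, out_dim))

    def forward(self, adj, x):
        # Masks stay differentiable during training; the lowest-
        # magnitude entries are pruned between training rounds.
        a = adj * torch.sigmoid(self.adj_mask)
        w = self.weight * torch.sigmoid(self.w_mask)
        return a @ x @ w

layer = MaskedGraphConv(n_nodes=5, in_dim=8, out_dim=4)
out = layer(torch.ones(5, 5), torch.randn(5, 8))
```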
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs [20.197085398581397]
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks.
We propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models.
SEG consistently improves the performance of well-known GNN models such as GCN, GAT and SGC across different datasets.
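One generic way model outputs can improve the input data is self-training: promoting high-confidence predictions to pseudo-labels before retraining. The sketch below shows only that generic idea; the function name and threshold are assumptions, and SEG itself goes beyond this simple scheme.

```python
import torch

def self_enhance_labels(logits, y, train_mask, threshold=0.9):
    """Hypothetical sketch: enlarge the training set with a trained
    GNN's high-confidence predictions as pseudo-labels."""
    probs = torch.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    confident = (conf > threshold) & ~train_mask
    new_y = torch.where(train_mask, y, pred)  # keep true labels
    return new_y, train_mask | confident      # enlarged train mask

# Toy usage on 6 nodes, 3 classes.
logits = torch.randn(6, 3)
y = torch.randint(0, 3, (6,))
mask = torch.tensor([True, True, False, False, False, False])
new_y, new_mask = self_enhance_labels(logits, y, mask)
```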
arXiv Detail & Related papers (2020-02-18T12:27:16Z)