MetaGL: Evaluation-Free Selection of Graph Learning Models via Meta-Learning
- URL: http://arxiv.org/abs/2206.09280v3
- Date: Thu, 8 Jun 2023 23:11:48 GMT
- Title: MetaGL: Evaluation-Free Selection of Graph Learning Models via Meta-Learning
- Authors: Namyong Park, Ryan Rossi, Nesreen Ahmed, Christos Faloutsos
- Abstract summary: We develop the first meta-learning approach for evaluation-free graph learning model selection, called MetaGL.
To quantify similarities across a wide variety of graphs, we introduce specialized meta-graph features.
Then we design the G-M network, which represents the relations among graphs and models, and develop a graph-based meta-learner.
- Score: 17.70842402755857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a graph learning task, such as link prediction, on a new graph, how can
we select the best method as well as its hyperparameters (collectively called a
model) without having to train or evaluate any model on the new graph? Model
selection for graph learning has been largely ad hoc. A typical approach has
been to apply popular methods to new datasets, but this is often suboptimal. On
the other hand, systematically comparing models on the new graph quickly
becomes too costly, or even impractical. In this work, we develop the first
meta-learning approach for evaluation-free graph learning model selection,
called MetaGL, which utilizes the prior performances of existing methods on
various benchmark graph datasets to automatically select an effective model for
the new graph, without any model training or evaluations. To quantify
similarities across a wide variety of graphs, we introduce specialized
meta-graph features that capture the structural characteristics of a graph.
Then we design the G-M network, which represents the relations among graphs and
models, and develop a graph-based meta-learner operating on this G-M network,
which estimates the relevance of each model to different graphs. Extensive
experiments show that using MetaGL to select a model for the new graph greatly
outperforms several existing meta-learning techniques tailored for graph
learning model selection (up to 47% better), while being extremely fast at test
time (~1 sec).
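To make the abstract's idea concrete, here is a minimal sketch of evaluation-free model selection. It is NOT the paper's actual method (MetaGL uses specialized meta-graph features and a G-M network meta-learner); it is a simplified nearest-neighbor analogue of the same idea, with an illustrative hand-picked feature set: summarize each graph with cheap structural features, then transfer the best-performing model from the most similar benchmark graph, without training anything on the new graph.

```python
# Simplified nearest-neighbor analogue of evaluation-free model selection.
# The feature set and the k=1 transfer rule are illustrative assumptions,
# not the MetaGL meta-graph features or its G-M network meta-learner.
import numpy as np
import networkx as nx

def meta_features(G: nx.Graph) -> np.ndarray:
    """Cheap structural summary of a graph (illustrative feature set)."""
    degs = np.array([d for _, d in G.degree()], dtype=float)
    return np.array([
        G.number_of_nodes(),
        G.number_of_edges(),
        degs.mean(),           # average degree
        degs.std(),            # degree spread (heavy tails vs. uniform)
        nx.density(G),
        nx.average_clustering(G),
    ])

def select_model(new_graph, bench_graphs, perf_matrix, model_names):
    """Pick a model for new_graph without training or evaluating on it.

    perf_matrix[i, j] = observed performance of model j on benchmark graph i.
    """
    feats = np.stack([meta_features(g) for g in bench_graphs])
    # Normalize features so no single scale dominates the distance.
    mu, sigma = feats.mean(0), feats.std(0) + 1e-9
    query = (meta_features(new_graph) - mu) / sigma
    dists = np.linalg.norm((feats - mu) / sigma - query, axis=1)
    nearest = int(dists.argmin())
    # Transfer the best model observed on the most similar benchmark graph.
    return model_names[int(perf_matrix[nearest].argmax())]

# Toy usage: two benchmark "families" with known best models.
bench = [nx.erdos_renyi_graph(100, 0.05, seed=0),
         nx.barabasi_albert_graph(150, 3, seed=0)]
perf = np.array([[0.9, 0.4],   # model-A wins on ER-like graphs
                 [0.3, 0.8]])  # model-B wins on BA-like graphs
pick = select_model(nx.barabasi_albert_graph(120, 3, seed=1),
                    bench, perf, ["model-A", "model-B"])
print(pick)
```

A scale-free query graph lands nearest the scale-free benchmark, so the model that performed best there is selected at negligible test-time cost; the paper replaces this 1-NN transfer with a learned meta-learner over the graph-model relation network.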
Related papers
- An Accurate Graph Generative Model with Tunable Features [0.8192907805418583]
We propose a method to improve the accuracy of GraphTune by adding a new mechanism to feed back errors of graph features.
Experiments on a real-world graph dataset showed that the features in the generated graphs are accurately tuned compared with conventional models.
arXiv Detail & Related papers (2023-09-03T12:34:15Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- Weakly-supervised Graph Meta-learning for Few-shot Node Classification [53.36828125138149]
We propose a new graph meta-learning framework -- Graph Hallucination Networks (Meta-GHN)
Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data.
Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies.
arXiv Detail & Related papers (2021-06-12T22:22:10Z)
- Stochastic Iterative Graph Matching [11.128153575173213]
We propose a new model, Iterative Graph MAtching, to address the graph matching problem.
Our model defines a distribution of matchings for a graph pair so the model can explore a wide range of possible matchings.
We conduct extensive experiments across synthetic graph datasets as well as biochemistry and computer vision applications.
arXiv Detail & Related papers (2021-06-04T02:05:35Z)
- Meta-Inductive Node Classification across Graphs [6.0471030308057285]
We propose a novel meta-inductive framework called MI-GNN to customize the inductive model to each graph.
MI-GNN does not directly learn an inductive model; it learns the general knowledge of how to train a model for semi-supervised node classification on new graphs.
Extensive experiments on five real-world graph collections demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-05-14T09:16:28Z)
- A Tunable Model for Graph Generation Using LSTM and Conditional VAE [1.399948157377307]
We propose a generative model that can tune specific features, while learning structural features of a graph from data.
With a dataset of graphs with various features generated by a model, we confirm that our model can generate a graph with specific features.
arXiv Detail & Related papers (2021-04-15T06:47:14Z)
- Graph Coding for Model Selection and Anomaly Detection in Gaussian Graphical Models [2.752817022620644]
We extend description length for data analysis in Gaussian graphical models.
Our method uses universal graph coding methods to accurately account for model complexity.
Experiments show that our method gives better performance compared to commonly used methods.
arXiv Detail & Related papers (2021-02-04T06:13:52Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) for Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.