MetaGL: Evaluation-Free Selection of Graph Learning Models via
Meta-Learning
- URL: http://arxiv.org/abs/2206.09280v3
- Date: Thu, 8 Jun 2023 23:11:48 GMT
- Title: MetaGL: Evaluation-Free Selection of Graph Learning Models via
Meta-Learning
- Authors: Namyong Park, Ryan Rossi, Nesreen Ahmed, Christos Faloutsos
- Abstract summary: We develop the first meta-learning approach for evaluation-free graph learning model selection, called MetaGL.
To quantify similarities across a wide variety of graphs, we introduce specialized meta-graph features.
Then we design the G-M network, which represents the relations among graphs and models, and develop a graph-based meta-learner.
- Score: 17.70842402755857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a graph learning task, such as link prediction, on a new graph, how can
we select the best method as well as its hyperparameters (collectively called a
model) without having to train or evaluate any model on the new graph? Model
selection for graph learning has been largely ad hoc. A typical approach has
been to apply popular methods to new datasets, but this is often suboptimal. On
the other hand, systematically comparing models on the new graph quickly
becomes too costly, or even impractical. In this work, we develop the first
meta-learning approach for evaluation-free graph learning model selection,
called MetaGL, which utilizes the prior performances of existing methods on
various benchmark graph datasets to automatically select an effective model for
the new graph, without any model training or evaluations. To quantify
similarities across a wide variety of graphs, we introduce specialized
meta-graph features that capture the structural characteristics of a graph.
Then we design the G-M network, which represents the relations among graphs
and models, and develop a graph-based meta-learner operating on this G-M
network that estimates the relevance of each model to different graphs. Extensive
experiments show that using MetaGL to select a model for the new graph greatly
outperforms several existing meta-learning techniques tailored for graph
learning model selection (up to 47% better), while being extremely fast at test
time (~1 sec).
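To make the selection pipeline concrete, the following is a minimal sketch of evaluation-free model selection in the spirit of the abstract: compute meta-features of the new graph, find benchmark graphs with similar features, and rank candidate models by the performance they achieved on those neighbors. The specific statistics used as meta-features, the k-nearest-neighbor transfer rule, and the toy performance matrix are illustrative assumptions; MetaGL's actual meta-graph features and G-M network meta-learner are more involved.
```python
# A minimal sketch of evaluation-free model selection: transfer prior
# performance records from benchmark graphs that are structurally similar
# to the new graph. The meta-features and the k-NN transfer rule are
# illustrative simplifications, not MetaGL's meta-graph features or its
# G-M network meta-learner.
import numpy as np
import networkx as nx

def meta_features(G: nx.Graph) -> np.ndarray:
    """A few cheap structural statistics standing in for meta-graph features."""
    degrees = np.array([d for _, d in G.degree()], dtype=float)
    return np.array([
        np.log1p(G.number_of_nodes()),
        np.log1p(G.number_of_edges()),
        degrees.mean(),
        degrees.std(),
        nx.density(G),
        nx.average_clustering(G),
    ])

def select_model(new_graph, bench_graphs, perf_matrix, model_names, k=3):
    """Rank candidate models for new_graph without training any of them.

    perf_matrix[i, j] = observed performance of model j on benchmark graph i.
    """
    F = np.array([meta_features(G) for G in bench_graphs])
    mu, sigma = F.mean(axis=0), F.std(axis=0) + 1e-9
    F = (F - mu) / sigma
    f_new = (meta_features(new_graph) - mu) / sigma
    neighbors = np.argsort(np.linalg.norm(F - f_new, axis=1))[:k]
    scores = perf_matrix[neighbors].mean(axis=0)      # transfer neighbors' records
    order = np.argsort(-scores)
    return [(model_names[i], float(scores[i])) for i in order]

# Toy usage: 3 benchmark graphs, 2 candidate models (e.g. link-prediction MRR).
bench = [nx.erdos_renyi_graph(200, 0.05, seed=s) for s in range(3)]
perf = np.array([[0.71, 0.64], [0.69, 0.70], [0.75, 0.62]])
print(select_model(nx.barabasi_albert_graph(300, 3, seed=0), bench, perf,
                   ["node2vec", "GCN"], k=2))
```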
Related papers
- An Automatic Graph Construction Framework based on Large Language Models for Recommendation [49.51799417575638]
We introduce AutoGraph, an automatic graph construction framework based on large language models for recommendation.
LLMs infer user preferences and item knowledge, which are encoded as semantic vectors.
Latent factors are incorporated as extra nodes to link the user/item nodes, resulting in a graph with in-depth global-view semantics.
arXiv Detail & Related papers (2024-12-24T07:51:29Z) - Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We introduce a novel approach for learning cross-task generalities in graphs.
We propose task-trees as basic learning instances to align task spaces on graphs.
Our findings indicate that when a graph neural network is pretrained on diverse task-trees, it acquires transferable knowledge.
arXiv Detail & Related papers (2024-12-21T02:07:43Z) - LLM-Based Multi-Agent Systems are Scalable Graph Generative Models [73.28294528654885]
GraphAgent-Generator (GAG) is a novel simulation-based framework for dynamic, text-attributed social graph generation.
GAG simulates the temporal node and edge generation processes for zero-shot social graph generation.
The resulting graphs adhere to seven key macroscopic network properties and achieve an 11% improvement in microscopic graph structure metrics.
arXiv Detail & Related papers (2024-10-13T12:57:08Z)
- An Accurate Graph Generative Model with Tunable Features [0.8192907805418583]
We propose a method to improve the accuracy of GraphTune by adding a new mechanism to feed back errors of graph features.
Experiments on a real-world graph dataset show that the features of the generated graphs are tuned more accurately than with conventional models.
arXiv Detail & Related papers (2023-09-03T12:34:15Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
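As an aside on the similarity-aware sampling idea in the entry above, here is a small illustrative sketch: positives for contrastive pre-training are chosen from existing training graphs by pairwise similarity rather than generated by augmentation. The degree-histogram distance used here is only a stand-in; the paper relies on domain-specific similarity measurements.
```python
# Illustrative sketch of similarity-aware positive sampling for graph
# contrastive pre-training: positives are picked from existing training
# graphs rather than produced by augmentation. The degree-histogram
# distance is an assumed stand-in for the paper's domain-specific
# pairwise similarity measurements.
import numpy as np
import networkx as nx

def degree_histogram(G: nx.Graph, max_deg: int = 32) -> np.ndarray:
    hist = np.zeros(max_deg + 1)
    for _, d in G.degree():
        hist[min(d, max_deg)] += 1
    return hist / max(G.number_of_nodes(), 1)

def sample_positives(anchor_idx, graphs, k=2):
    """Return indices of the k training graphs most similar to the anchor."""
    hists = np.array([degree_histogram(G) for G in graphs])
    dists = np.abs(hists - hists[anchor_idx]).sum(axis=1)   # L1 distance
    dists[anchor_idx] = np.inf                               # exclude the anchor itself
    return list(np.argsort(dists)[:k])

# Toy usage: pick positives for graph 0 from a small training set; the other
# sparse graph (index 1) should come out as the most similar instance.
train_graphs = [nx.erdos_renyi_graph(100, p, seed=i)
                for i, p in enumerate([0.03, 0.04, 0.20, 0.21])]
print(sample_positives(0, train_graphs, k=2))
```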
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object, and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- Weakly-supervised Graph Meta-learning for Few-shot Node Classification [53.36828125138149]
We propose a new graph meta-learning framework, Graph Hallucination Networks (Meta-GHN).
Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data.
Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies.
arXiv Detail & Related papers (2021-06-12T22:22:10Z)
- Stochastic Iterative Graph Matching [11.128153575173213]
We propose a new model, Stochastic Iterative Graph Matching, to address the graph matching problem.
Our model defines a distribution of matchings for a graph pair so the model can explore a wide range of possible matchings.
We conduct extensive experiments across synthetic graph datasets as well as biochemistry and computer vision applications.
arXiv Detail & Related papers (2021-06-04T02:05:35Z)
- Meta-Inductive Node Classification across Graphs [6.0471030308057285]
We propose a novel meta-inductive framework called MI-GNN to customize the inductive model to each graph.
MI-GNN does not directly learn an inductive model; it learns the general knowledge of how to train a model for semi-supervised node classification on new graphs.
Extensive experiments on five real-world graph collections demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-05-14T09:16:28Z)
- A Tunable Model for Graph Generation Using LSTM and Conditional VAE [1.399948157377307]
We propose a generative model that can tune specific features, while learning structural features of a graph from data.
With a dataset of graphs with various features generated by a model, we confirm that our model can generate graphs with specific features.
arXiv Detail & Related papers (2021-04-15T06:47:14Z)
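For the tunable generation idea in the entry above, the following is a minimal sketch of conditioning a graph generator on a controllable feature. For brevity it replaces the paper's LSTM-based components with a plain MLP over small fixed-size adjacency matrices, and the conditioning scalar (here, a target edge density) is an assumed stand-in for the tunable features.
```python
# Minimal sketch of a conditional VAE that generates small graphs while
# conditioning on one tunable feature. The MLP encoder/decoder and the
# single conditioning scalar are illustrative assumptions, not the
# LSTM-based architecture of the cited paper.
import torch
import torch.nn as nn

N = 12                      # nodes per toy graph
D = N * N                   # flattened adjacency size

class CondGraphVAE(nn.Module):
    def __init__(self, latent=16, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(D + 1, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, D))

    def forward(self, adj, c):
        # adj: [B, D] flattened adjacency, c: [B, 1] tunable feature
        h = self.enc(torch.cat([adj, c], dim=1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

    @torch.no_grad()
    def generate(self, c):
        # Sample a graph with the tunable feature set to c.
        z = torch.randn(c.shape[0], self.to_mu.out_features)
        probs = torch.sigmoid(self.dec(torch.cat([z, c], dim=1)))
        return (probs > 0.5).float().view(-1, N, N)

# Training would minimize binary cross-entropy reconstruction of the adjacency
# plus the usual KL term; generation then conditions on the desired feature
# value, e.g. a target edge density of roughly 0.3:
model = CondGraphVAE()
print(model.generate(torch.tensor([[0.3]])).shape)   # torch.Size([1, 12, 12])
```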
This list is automatically generated from the titles and abstracts of the papers on this site.