On Evaluation Metrics for Graph Generative Models
- URL: http://arxiv.org/abs/2201.09871v1
- Date: Mon, 24 Jan 2022 18:49:27 GMT
- Title: On Evaluation Metrics for Graph Generative Models
- Authors: Rylee Thompson, Boris Knyazev, Elahe Ghalebi, Jungtaek Kim, Graham W.
Taylor
- Abstract summary: We study existing graph generative models (GGMs) and neural-network-based metrics for evaluating GGMs.
Motivated by the power of certain Graph Neural Networks (GNNs) to extract meaningful graph representations without any training, we introduce several metrics based on the features extracted by an untrained random GNN.
- Score: 17.594098458581694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In image generation, generative models can be evaluated naturally by visually
inspecting model outputs. However, this is not always the case for graph
generative models (GGMs), making their evaluation challenging. Currently, the
standard process for evaluating GGMs suffers from three critical limitations:
i) it does not produce a single score which makes model selection challenging,
ii) in many cases it fails to consider underlying edge and node features, and
iii) it is prohibitively slow to perform. In this work, we mitigate these
issues by searching for scalar, domain-agnostic, and scalable metrics for
evaluating and ranking GGMs. To this end, we study existing GGM metrics and
neural-network-based metrics emerging from generative models of images that use
embeddings extracted from a task-specific network. Motivated by the power of
certain Graph Neural Networks (GNNs) to extract meaningful graph
representations without any training, we introduce several metrics based on the
features extracted by an untrained random GNN. We design experiments to
thoroughly test metrics on their ability to measure the diversity and fidelity
of generated graphs, as well as their sample and computational efficiency.
Depending on the quantity of samples, we recommend one of two random-GNN-based
metrics that we show to be more expressive than pre-existing metrics. While we
focus on applying these metrics to GGM evaluation, in practice they make it
easy to compute the dissimilarity between any two sets of graphs
regardless of domain. Our code is released at:
https://github.com/uoguelph-mlrg/GGM-metrics.
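The core idea — embedding each graph with an untrained, randomly initialized message-passing GNN and comparing the two resulting embedding sets — can be sketched in plain numpy. This is an illustrative simplification, not the released implementation: the tanh propagation, mean pooling, and the RBF-kernel MMD distance below are assumptions standing in for the paper's actual architecture and metric choices.

```python
import numpy as np

def random_gnn_features(adj, x, hidden=16, layers=2, rng=None):
    """Embed one graph with an *untrained* message-passing GNN.

    adj: (n, n) adjacency matrix; x: (n, d) node features.
    Weights are random and never trained, following the paper's
    observation that random GNNs already extract meaningful features.
    """
    rng = np.random.default_rng(rng)
    a_hat = adj + np.eye(adj.shape[0])       # add self-loops
    a_hat = a_hat / a_hat.sum(1, keepdims=True)  # row-normalize propagation
    h = x
    for _ in range(layers):
        w = rng.standard_normal((h.shape[1], hidden)) / np.sqrt(h.shape[1])
        h = np.tanh(a_hat @ h @ w)           # propagate + random projection
    return h.mean(0)                         # mean-pool to one graph vector

def mmd_rbf(feats_a, feats_b, sigma=1.0):
    """Squared MMD with an RBF kernel between two sets of graph embeddings."""
    def k(u, v):
        d = ((u[:, None, :] - v[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return (k(feats_a, feats_a).mean() + k(feats_b, feats_b).mean()
            - 2 * k(feats_a, feats_b).mean())
```

Passing the same seed to every `random_gnn_features` call keeps the random weights shared across graphs, so all graphs are embedded by one fixed random GNN and their embeddings are comparable.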
Related papers
- Graph Generative Models Evaluation with Masked Autoencoder [10.977907906989342]
We propose a novel method that leverages graph masked autoencoders to effectively extract graph features for graph generative models evaluations.
We conduct extensive experiments on graphs and empirically demonstrate that our method can be more reliable and effective than previously proposed methods.
arXiv Detail & Related papers (2025-03-17T15:23:21Z)
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks [2.4175455407547015]
Graph neural networks learn to represent nodes by aggregating information from their neighbors.
Several existing methods address this by sampling a small subset of nodes, scaling GNNs to much larger graphs.
We introduce GRAPES, an adaptive sampling method that learns to identify the set of nodes crucial for training a GNN.
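The sampling idea this summary describes — keeping only a small subset of each node's neighbors so that GNN training scales to large graphs — can be sketched as uniform neighbor sampling. Note this is a hypothetical simplification: GRAPES *learns* which nodes are crucial to keep, whereas the sketch below picks neighbors uniformly at random.

```python
import numpy as np

def sample_neighborhood(adj_list, seed_nodes, fanout, rng=None):
    """Uniformly sample up to `fanout` neighbors per seed node.

    adj_list: dict mapping node id -> list of neighbor ids.
    Returns a dict with the (possibly truncated) neighbor lists, the
    basic building block that adaptive samplers like GRAPES refine.
    """
    rng = np.random.default_rng(rng)
    sampled = {}
    for v in seed_nodes:
        nbrs = adj_list.get(v, [])
        if len(nbrs) > fanout:
            nbrs = list(rng.choice(nbrs, size=fanout, replace=False))
        sampled[v] = nbrs
    return sampled
```

Applied layer by layer from a minibatch of target nodes outward, this bounds the receptive field per node at `fanout**layers` instead of the full multi-hop neighborhood.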
arXiv Detail & Related papers (2023-10-05T09:08:47Z)
- GRAN is superior to GraphRNN: node orderings, kernel- and graph embeddings-based metrics for graph generators [0.6816499294108261]
We study kernel-based metrics on distributions of graph invariants and manifold-based and kernel-based metrics in graph embedding space.
We compare GraphRNN and GRAN, two well-known generative models for graphs, and unveil the influence of node orderings.
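A kernel-based metric on distributions of graph invariants, of the kind this paper studies, can be sketched as an MMD between degree histograms under a Gaussian total-variation kernel. The function names and the bandwidth choice below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def degree_histogram(adj, max_degree=10):
    """Normalized degree histogram — a simple graph invariant."""
    deg = adj.sum(1).astype(int)
    hist = np.bincount(np.clip(deg, 0, max_degree), minlength=max_degree + 1)
    return hist / max(hist.sum(), 1)

def gaussian_tv_mmd(hists_a, hists_b, sigma=1.0):
    """Squared MMD between two sets of histograms, using a Gaussian
    kernel on total-variation distance between histogram pairs."""
    def k(u, v):
        tv = 0.5 * np.abs(u[:, None, :] - v[None, :, :]).sum(-1)
        return np.exp(-(tv ** 2) / (2 * sigma ** 2))
    return (k(hists_a, hists_a).mean() + k(hists_b, hists_b).mean()
            - 2 * k(hists_a, hists_b).mean())
```

Identical graph sets score (near) zero, and the score grows as the two degree distributions diverge, which is what makes this family of metrics usable for ranking generators.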
arXiv Detail & Related papers (2023-07-13T12:07:39Z)
- SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks [5.008597638379227]
Graph neural networks (GNNs) have become the de facto model of choice for graph classification.
We propose a regularization strategy that can be applied to any GNN to improve its generalization capabilities without requiring access to the test data.
Our regularization is based on the idea of simulating a shift in the size of the training graphs using coarsening techniques.
arXiv Detail & Related papers (2022-07-16T09:50:45Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Evaluating Graph Generative Models with Contrastively Learned Features [9.603362400275868]
We show that Graph Substructure Networks (GSNs) are better at distinguishing the distances between graph datasets.
We propose using representations from contrastively trained GNNs, rather than random GNNs, and show this gives more reliable evaluation metrics.
arXiv Detail & Related papers (2022-06-13T15:14:41Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.