Training Free Graph Neural Networks for Graph Matching
- URL: http://arxiv.org/abs/2201.05349v1
- Date: Fri, 14 Jan 2022 09:04:46 GMT
- Title: Training Free Graph Neural Networks for Graph Matching
- Authors: Zhiyuan Liu, Yixin Cao, Fuli Feng, Xiang Wang, Xindi Shang, Jie Tang,
Kenji Kawaguchi, Tat-Seng Chua
- Abstract summary: TFGM is a framework to boost the performance of Graph Neural Network (GNN) based graph matching without training.
Applying TFGM to various GNNs shows promising improvements over baselines.
- Score: 103.45755859119035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present TFGM (Training Free Graph Matching), a framework to boost the
performance of Graph Neural Network (GNN) based graph matching without
training. TFGM sidesteps two crucial problems in training GNNs: 1) limited
supervision due to expensive annotation, and 2) the computational cost of
training. We first propose a basic framework, BasicTFGM, by adopting
the inference stage of graph matching methods. Our analysis shows that
BasicTFGM is a linear relaxation of the quadratic assignment formulation of
graph matching, which preserves structural compatibility while retaining
efficient polynomial-time complexity. Empirically, we further improve
BasicTFGM by handcrafting two types of matching priors into the architecture of
GNNs: comparing node neighborhoods of different localities and utilizing
annotation data if available. For evaluation, we conduct extensive experiments
on a broad set of settings, including supervised keypoint matching between
images, semi-supervised entity alignment between knowledge graphs, and
unsupervised alignment between protein interaction networks. Applying TFGM to
various GNNs shows promising improvements over baselines. Further ablation
studies demonstrate the effectiveness and efficiency of TFGM's training-free design.
Our code is available at
https://github.com/acharkq/Training-Free-Graph-Matching.
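To make the "linear relaxation" claim concrete: graph matching is classically posed as a quadratic assignment problem (QAP), while a training-free pipeline scores each node pair once and solves a linear assignment. In generic notation (not copied from the paper), with affinity matrix K and assignment matrix X:

```latex
% Quadratic assignment formulation of graph matching (NP-hard in general):
\max_{X \in \{0,1\}^{n \times n},\; X\mathbf{1}=\mathbf{1},\; X^{\top}\mathbf{1}=\mathbf{1}}
  \operatorname{vec}(X)^{\top} K \operatorname{vec}(X)

% Linear relaxation: score node pairs once with a similarity matrix S
% (e.g., inner products of GNN embeddings) and solve a linear assignment
% problem, which is polynomial time (Hungarian algorithm, O(n^3)):
\max_{X}\; \langle S, X \rangle = \sum_{i,j} S_{ij} X_{ij}
```

Below is a minimal, hypothetical sketch of what such a training-free pipeline could look like: a randomly initialized (untrained) message-passing GNN embeds both graphs, per-layer similarities are summed so that node neighborhoods of different localities are compared, and the match is read off with the Hungarian algorithm. Function names and the normalization are illustrative choices, not the authors' code (see the repository above for that).

```python
# A hedged sketch of training-free graph matching: untrained GNN
# embeddings plus linear assignment. Assumes only numpy and scipy.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def make_random_gnn(dim, num_layers=3):
    """Fixed random weights: the SAME untrained GNN must be applied to
    both graphs so that their embeddings live in a comparable space."""
    return [rng.standard_normal((dim, dim)) / np.sqrt(dim)
            for _ in range(num_layers)]

def propagate(adj, x, weights):
    """Untrained message passing: average neighbor features, mix with
    fixed random weights, and keep the output of every layer so that
    neighborhoods of different localities can be compared."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    hs, h = [], x
    for w in weights:
        h = np.tanh(((adj @ h) / deg) @ w)
        hs.append(h)
    return hs

def basic_tfgm_match(adj1, x1, adj2, x2, num_layers=3):
    """adj*: (n_i, n_i) adjacency; x*: (n_i, d) node features."""
    weights = make_random_gnn(x1.shape[1], num_layers)
    s = x1 @ x2.T  # 0-hop similarity from raw node features
    for h1, h2 in zip(propagate(adj1, x1, weights),
                      propagate(adj2, x2, weights)):
        s += h1 @ h2.T  # add k-hop neighborhood similarity
    rows, cols = linear_sum_assignment(-s)  # Hungarian, maximizing
    return dict(zip(rows, cols))
```

Every step here is inference; no gradient update ever happens, which is the training-free property the ablation studies probe.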
Related papers
- Efficient Graph Similarity Computation with Alignment Regularization [7.143879014059894]
Graph similarity computation (GSC) is a learning-based prediction task using Graph Neural Networks (GNNs).
We show that high-quality learning can be attained with a simple yet powerful regularization technique, which we call Alignment Regularization (AReg).
In the inference stage, the graph-level representations learned by the GNN encoder are directly used to compute the similarity score without using AReg again to speed up inference.
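As a generic, hypothetical illustration (not the paper's exact model) of inference without AReg: once the GNN encoder is trained, scoring a graph pair reduces to comparing two pooled graph-level embeddings.

```python
# Hypothetical GSC inference step: the similarity score comes from the
# pooled GNN embeddings alone; the AReg term is used only in training.
import torch

@torch.no_grad()
def similarity_score(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    """h1, h2: (n_i, d) node states from the trained GNN encoder.
    Mean-pool to graph level and compare with cosine similarity."""
    g1, g2 = h1.mean(dim=0), h2.mean(dim=0)
    return torch.cosine_similarity(g1, g2, dim=0)
```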
arXiv Detail & Related papers (2024-06-21T07:37:28Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
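That two-step recipe is concrete enough to sketch. A hypothetical minimal version with Hugging Face transformers and peft follows; the backbone, LoRA rank, and label count are placeholders, not the paper's configuration.

```python
# A sketch of the two-step SimTeG recipe summarized above, assuming
# Hugging Face transformers + peft. Model name and hyperparameters
# are illustrative, not the paper's setup.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=7)

# Step 1: parameter-efficient fine-tuning (here LoRA) on the
# downstream node classification task, using each node's text.
model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS", r=8))
# ... run a standard supervised training loop over the node texts ...

# Step 2: node features = pooled last hidden states of the tuned LM.
@torch.no_grad()
def node_embeddings(texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    out = model(**batch, output_hidden_states=True)
    h = out.hidden_states[-1]                     # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)  # ignore padding
    return (h * mask).sum(1) / mask.sum(1)        # mean over real tokens

x = node_embeddings(["title + abstract of node 0", "text of node 1"])
# `x` can then be handed to any GNN as its input node features.
```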
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- A Class-Aware Representation Refinement Framework for Graph Classification [8.998543739618077]
We propose a Class-Aware Representation rEfinement (CARE) framework for the task of graph classification.
CARE computes simple yet powerful class representations and injects them to steer the learning of graph representations towards better class separability.
Our experiments with 11 well-known GNN backbones on 9 benchmark datasets validate the superiority and effectiveness of CARE over its GNN counterparts.
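CARE's exact refinement operator is defined in the paper; as a generic, hypothetical illustration of "class representations steering graph representations", one simple instantiation computes per-class centroids of graph embeddings and penalizes distance to the own-class centroid.

```python
# Hypothetical class-aware refinement loss: NOT CARE's exact operator,
# just the generic centroid idea the summary describes. Assumes torch.
import torch

def class_separability_loss(graph_emb, labels, num_classes):
    """graph_emb: (batch, dim) GNN graph embeddings; labels: (batch,).
    Pull each embedding toward its class centroid (computed from the
    batch itself) so that classes become more separable."""
    loss = graph_emb.new_zeros(())
    for c in range(num_classes):
        members = graph_emb[labels == c]
        if len(members) == 0:
            continue
        centroid = members.mean(dim=0).detach()  # class representation
        loss = loss + ((members - centroid) ** 2).sum(dim=1).mean()
    return loss / num_classes
```

Such a term would be added to the usual classification loss with a small weight.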
arXiv Detail & Related papers (2022-09-02T10:18:33Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
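The two constructions named here are standard; a small sketch building both adjacencies with numpy/scipy (point coordinates and k are placeholders):

```python
# Building the two graph types compared in this entry from point
# coordinates: K-nearest-neighbor (KNN) and fully-connected (FC).
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph(points: np.ndarray, k: int) -> np.ndarray:
    """Adjacency with an edge from each point to its k nearest
    neighbors (excluding itself)."""
    d = cdist(points, points)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    adj = np.zeros(d.shape, dtype=bool)
    rows = np.repeat(np.arange(len(points)), k)
    adj[rows, nbrs.ravel()] = True
    return adj

def fc_graph(n: int) -> np.ndarray:
    """Fully-connected adjacency: every pair of distinct nodes."""
    return ~np.eye(n, dtype=bool)
```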
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- GPN: A Joint Structural Learning Framework for Graph Neural Networks [36.38529113603987]
We propose a GNN-based joint learning framework that simultaneously learns the graph structure and solves the downstream task.
Our method is the first GNN-based bilevel optimization framework for resolving this task.
arXiv Detail & Related papers (2022-05-12T09:06:04Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structure-related features for graph representation learning.
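Distance encodings have a simple common instantiation: augment each node's features with its distances to a target node set, which breaks the symmetry that makes plain message passing assign identical representations to distinct substructures. A hedged sketch with networkx follows; the paper also covers other distance measures (e.g., random-walk based), and the cap value is an illustrative choice.

```python
# A minimal sketch of distance-encoding features: per node, the
# shortest-path distance to each node in a target set S. Assumes
# networkx; a cap value stands in for unreachable pairs.
import networkx as nx
import numpy as np

def distance_encoding(graph: nx.Graph, target_set, cap: int = 10):
    """Return an (n, |S|) feature matrix of truncated shortest-path
    distances from every node to every node in `target_set`."""
    nodes = list(graph.nodes)
    feats = np.full((len(nodes), len(target_set)), cap, dtype=float)
    for j, s in enumerate(target_set):
        lengths = nx.single_source_shortest_path_length(graph, s)
        for i, u in enumerate(nodes):
            feats[i, j] = min(lengths.get(u, cap), cap)
    return feats  # concatenate with raw node features before the GNN
```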
arXiv Detail & Related papers (2020-08-31T23:15:40Z)