MGNet: Learning Correspondences via Multiple Graphs
- URL: http://arxiv.org/abs/2401.04984v1
- Date: Wed, 10 Jan 2024 07:58:44 GMT
- Title: MGNet: Learning Correspondences via Multiple Graphs
- Authors: Luanyuan Dai, Xiaoyu Du, Hanwang Zhang, Jinhui Tang
- Abstract summary: Learning correspondences aims to find correct correspondences from the initial correspondence set with an uneven correspondence distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
- Score: 78.0117352211091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning correspondences aims to find correct correspondences (inliers) from
the initial correspondence set with an uneven correspondence distribution and a
low inlier rate, which can be regarded as graph data. Recent advances usually
use graph neural networks (GNNs) to build a single type of graph or simply
stack local graphs into the global one to complete the task. But they ignore
the complementary relationship between different types of graphs, which can
effectively capture potential relationships among sparse correspondences. To
address this problem, we propose MGNet to effectively combine multiple
complementary graphs. To obtain information integrating implicit and explicit
local graphs, we construct local graphs from implicit and explicit aspects and
combine them effectively, which is used to build a global graph. Moreover, we
propose Graph Soft Degree Attention (GSDA) to make full use of all sparse
correspondence information at once in the global graph, which can capture and
amplify discriminative features. Extensive experiments demonstrate that MGNet
outperforms state-of-the-art methods in different visual tasks. The code is
provided at https://github.com/DAILUANYUAN/MGNet-2024AAAI.
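
To make the two ideas named in the abstract more concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' released code: it fuses an implicit (feature-space kNN) local graph with an explicit (coordinate-space kNN) local graph, then applies a toy "soft degree"-weighted global attention. The function names, tensor shapes, and the specific fusion and attention rules are all assumptions; see the linked repository for the actual GSDA formulation.

```python
# Hypothetical sketch (not the authors' code): (1) fuse an implicit
# (feature-space kNN) and an explicit (coordinate-space kNN) local graph,
# (2) apply a "soft degree"-weighted global attention. Names, shapes, and
# the exact fusion/attention rules are assumptions for illustration only.
import torch
import torch.nn.functional as F


def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    """Return an (N, N) 0/1 adjacency built from the k nearest neighbours of x."""
    dist = torch.cdist(x, x)                                  # pairwise distances, (N, N)
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]      # drop the self-match
    adj = torch.zeros_like(dist)
    adj.scatter_(1, idx, 1.0)
    return adj


def fused_local_graph(feat: torch.Tensor, coords: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Combine implicit (feature-space) and explicit (coordinate-space) kNN graphs.

    A simple union of the two edge sets; the paper's actual combination is
    presumably learned, so this is only a placeholder.
    """
    a_implicit = knn_graph(feat, k)        # neighbours in feature space
    a_explicit = knn_graph(coords, k)      # neighbours in image/coordinate space
    return torch.clamp(a_implicit + a_explicit, max=1.0)


def soft_degree_attention(feat: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for GSDA: global attention rescaled by a soft node degree.

    The soft degree here is a softmax over each node's row-sum degree in the
    fused adjacency; the real GSDA formulation may differ substantially.
    """
    soft_deg = F.softmax(adj.sum(dim=1, keepdim=True), dim=0)               # (N, 1)
    attn = F.softmax(feat @ feat.t() / feat.shape[-1] ** 0.5, dim=-1)       # (N, N)
    return (soft_deg * attn) @ feat        # degree-amplified global update


# Example: 512 putative correspondences with 128-d features and 4-d coordinates.
feat, coords = torch.randn(512, 128), torch.randn(512, 4)
out = soft_degree_attention(feat, fused_local_graph(feat, coords))
print(out.shape)   # torch.Size([512, 128])
```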
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitively expensive on large graphs unless they are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
arXiv Detail & Related papers (2024-05-31T09:26:26Z) - G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
arXiv Detail & Related papers (2024-02-12T13:13:04Z) - Hybrid Graph: A Unified Graph Representation with Datasets and Benchmarks for Complex Graphs [27.24150788635981]
We introduce the concept of hybrid graphs and present the Hybrid Graph Benchmark (HGB).
HGB contains 23 real-world hybrid graph datasets across various domains such as biology, social media, and e-commerce.
We provide an evaluation framework and a supporting framework to facilitate the training and evaluation of Graph Neural Networks (GNNs) on HGB.
arXiv Detail & Related papers (2023-06-08T11:15:34Z) - Learnable Graph Matching: A Practical Paradigm for Data Association [74.28753343714858]
We propose a general learnable graph matching method to address these issues.
Our method achieves state-of-the-art performance on several MOT datasets.
For image matching, our method outperforms state-of-the-art methods on a popular indoor dataset, ScanNet.
arXiv Detail & Related papers (2023-03-27T17:39:00Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure and may contain uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)