Graph Context Transformation Learning for Progressive Correspondence Pruning
- URL: http://arxiv.org/abs/2312.15971v1
- Date: Tue, 26 Dec 2023 09:43:30 GMT
- Title: Graph Context Transformation Learning for Progressive Correspondence Pruning
- Authors: Junwen Guo, Guobao Xiao, Shiping Wang, Jun Yu
- Abstract summary: We propose the Graph Context Transformation Network (GCT-Net), which enhances context information to conduct consensus guidance for progressive correspondence pruning.
Specifically, we design the Graph Context Enhance Transformer, which first generates the graph network and then transforms it into multi-branch graph contexts.
To further apply the recalibrated graph contexts to the global domain, we propose the Graph Context Guidance Transformer.
- Score: 26.400567961735234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing correspondence pruning methods concentrate only on
gathering as much context information as possible while neglecting effective
ways to utilize it. To tackle this dilemma, in this paper we propose the Graph
Context Transformation Network (GCT-Net), which enhances context information
to conduct consensus guidance for progressive correspondence pruning.
Specifically, we design the Graph Context Enhance Transformer, which first
generates the graph network and then transforms it into multi-branch graph
contexts. Moreover, it employs self-attention and cross-attention to magnify
the characteristics of each graph context, emphasizing the unique as well as
the shared essential information. To further apply the recalibrated graph
contexts to the global domain, we propose the Graph Context Guidance
Transformer. This module adopts a confidence-based sampling strategy to
temporarily screen high-confidence vertices, which guide accurate
classification by searching for global consensus between the screened vertices
and the remaining ones. Extensive experimental results on outlier removal and
relative pose estimation clearly demonstrate the superior performance of
GCT-Net compared to state-of-the-art methods across outdoor and indoor
datasets. The source code will be available at:
https://github.com/guobaoxiao/GCT-Net/.
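The abstract names two mechanisms: self- and cross-attention over multi-branch graph contexts, followed by confidence-based screening of vertices that then guide the rest. Below is a minimal PyTorch sketch of that flow; the class name GCTSketch, the shared attention modules, all shapes, and the keep ratio are illustrative assumptions, not the released GCT-Net implementation.

```python
# A minimal, self-contained sketch of the two ideas the abstract describes:
# (1) magnifying per-branch graph contexts with self- and cross-attention, and
# (2) confidence-based screening of vertices that guides the remaining ones.
# All names, shapes, and layer choices are assumptions, not the authors' code.
import torch
import torch.nn as nn

class GCTSketch(nn.Module):
    def __init__(self, dim=128, heads=4, keep_ratio=0.5):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)      # per-vertex confidence logit
        self.guide = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.keep_ratio = keep_ratio

    def forward(self, a, b):
        # a, b: two branch graph contexts, each (batch, N, dim).
        a = a + self.self_attn(a, a, a)[0]  # emphasize branch-unique info
        b = b + self.self_attn(b, b, b)[0]
        a = a + self.cross_attn(a, b, b)[0] # exchange shared essential info
        x = a                               # fused context, (batch, N, dim)

        # Confidence-based sampling: temporarily screen high-confidence vertices.
        conf = self.score(x).squeeze(-1)    # (batch, N)
        k = max(1, int(x.shape[1] * self.keep_ratio))
        idx = conf.topk(k, dim=1).indices   # indices of screened vertices
        screened = torch.gather(
            x, 1, idx.unsqueeze(-1).expand(-1, -1, x.shape[-1]))

        # Guide all vertices by searching consensus with the screened set.
        x = x + self.guide(x, screened, screened)[0]
        return x, conf                      # refined features + inlier logits

# Usage: two branch contexts for 512 putative correspondences.
net = GCTSketch()
a, b = torch.randn(2, 512, 128), torch.randn(2, 512, 128)
feats, logits = net(a, b)
```

In this sketch the conf logits stand in for per-correspondence inlier scores, the quantity a progressive pruning stage would threshold or re-rank.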
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
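The GSPT summary above hinges on sampling node contexts through random walks; here is a toy sampler showing what such a context looks like. The adjacency format, walk length, and function name are assumptions.

```python
# A toy illustration of sampling a node's context with a random walk; the
# resulting sequence can be fed to a standard Transformer as tokens.
import random

def random_walk_context(adj, start, walk_len=5, rng=None):
    """Return the sequence of nodes visited by a random walk from `start`.

    adj: dict mapping each node to a list of its neighbors.
    """
    rng = rng or random.Random(0)
    walk = [start]
    for _ in range(walk_len):
        neighbors = adj[walk[-1]]
        if not neighbors:          # dead end: stop the walk early
            break
        walk.append(rng.choice(neighbors))
    return walk

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(random_walk_context(adj, start=0))   # e.g. [0, 2, 1, 0, 1, 2]
```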
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attention in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
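The GTGAN encoder above is described as mixing graph convolutions with self-attention inside one Transformer block. A rough sketch of such a combined layer, with GCN-style aggregation as the local path; the sizes and the sum-fusion are assumptions.

```python
# One encoder layer mixing a graph convolution (local) with self-attention
# (global), in the spirit of the GTGAN summary.
import torch
import torch.nn as nn

class ConvAttnLayer(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.lin = nn.Linear(dim, dim)                 # GCN-style weight
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj):
        # x: (batch, N, dim); adj: (batch, N, N) row-normalized adjacency.
        local = self.lin(torch.bmm(adj, x))            # neighborhood mixing
        global_, _ = self.attn(x, x, x)                # all-pairs mixing
        return self.norm(x + local + global_)

x, adj = torch.randn(2, 16, 64), torch.softmax(torch.randn(2, 16, 16), -1)
out = ConvAttnLayer()(x, adj)                          # (2, 16, 64)
```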
- Feature propagation as self-supervision signals on graphs [0.0]
Regularized Graph Infomax (RGI) is a simple yet effective framework for node-level self-supervised learning.
We show that RGI can achieve state-of-the-art performance despite its simplicity.
arXiv Detail & Related papers (2023-03-15T14:20:06Z)
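One plausible reading of the feature-propagation idea above: propagate node features over the graph and use the result as a self-supervision target. The encoder, predictor, propagation depth, and MSE loss below are assumptions, not RGI's exact objective.

```python
# Propagated features as self-supervision targets: a rough sketch.
import torch
import torch.nn as nn

def propagate(x, adj, steps=2):
    # Repeatedly average each node with its neighbors (adj row-normalized).
    for _ in range(steps):
        x = torch.mm(adj, x)
    return x

encoder = nn.Linear(32, 16)
predictor = nn.Linear(16, 32)
x = torch.randn(10, 32)
adj = torch.softmax(torch.randn(10, 10), dim=-1)

target = propagate(x, adj).detach()   # feature propagation as the signal
loss = nn.functional.mse_loss(predictor(encoder(x)), target)
loss.backward()
```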
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
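The GITL setting above centers on the nodes shared between the original graph and a denser complementary one; a tiny illustration of that bridge, with made-up node sets:

```python
# The intersection of the two node sets is the bridge through which
# selective knowledge transfers in the GITL setting.
original_nodes = {"a", "b", "c", "d"}
denser_nodes = {"b", "c", "d", "e", "f"}

bridge = original_nodes & denser_nodes   # shared nodes: {'b', 'c', 'd'}
print(f"transfer through {len(bridge)} shared nodes: {sorted(bridge)}")
```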
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
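The bi-level optimization above alternates between fitting model weights and updating the graph structure; a compact sketch of one such alternating scheme follows. The single-step loop, soft adjacency, and linear model are assumptions.

```python
# Bi-level optimization over graph structure: the inner step updates model
# weights, the outer step updates a learnable (soft) adjacency.
import torch
import torch.nn as nn

n, d = 8, 4
adj_logits = torch.randn(n, n, requires_grad=True)  # outer variable: structure
model = nn.Linear(d, 2)                             # inner variables: weights
inner_opt = torch.optim.SGD(model.parameters(), lr=0.1)
outer_opt = torch.optim.SGD([adj_logits], lr=0.1)

x, y = torch.randn(n, d), torch.randint(0, 2, (n,))
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):
    # Inner step: fit the model on the current structure.
    inner_opt.zero_grad()
    adj = torch.softmax(adj_logits, dim=-1)         # soft, differentiable graph
    loss_fn(model(adj @ x), y).backward()
    inner_opt.step()
    # Outer step: improve the structure under the updated model.
    outer_opt.zero_grad()
    loss_fn(model(torch.softmax(adj_logits, -1) @ x), y).backward()
    outer_opt.step()
```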
- Representing Long-Range Context for Graph Neural Networks with Global Attention [37.212747564546156]
We propose the use of Transformer-based self-attention to learn long-range pairwise relationships.
Our method, which we call GraphTrans, applies a permutation-invariant Transformer module after a standard GNN module.
Our results suggest that purely-learning-based approaches without graph structure may be suitable for learning high-level, long-range relationships on graphs.
arXiv Detail & Related papers (2022-01-21T18:16:21Z)
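GraphTrans, summarized above, stacks a permutation-invariant Transformer after a standard GNN, with no positional encodings so node order does not matter. A minimal sketch, with a linear layer standing in for message passing; sizes are illustrative.

```python
# GNN first for local structure, then a position-free Transformer for
# long-range pairwise relationships, per the GraphTrans summary.
import torch
import torch.nn as nn

class GNNThenTransformer(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.gnn = nn.Linear(dim, dim)   # stand-in for a message-passing layer
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x, adj):
        # x: (batch, N, dim); adj: (batch, N, N) row-normalized.
        x = torch.relu(self.gnn(torch.bmm(adj, x)))  # local structure first
        return self.transformer(x)                   # global, position-free

x, adj = torch.randn(2, 16, 64), torch.softmax(torch.randn(2, 16, 16), -1)
out = GNNThenTransformer()(x, adj)
```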
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
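AGE, per the summary above, generates fake neighbor embeddings as enhanced negative samples. A rough sketch of that idea with a margin-style contrastive loss; the generator form and the loss are assumptions based only on the summary.

```python
# A generator produces fake neighbor embeddings that serve as hard negatives
# for embedding learning, in the spirit of the AGE summary.
import torch
import torch.nn as nn

dim = 32
generator = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # implicit sampler

anchor = torch.randn(64, dim)                    # node embeddings
real_neighbor = torch.randn(64, dim)             # embeddings of true neighbors
fake_neighbor = generator(torch.randn(64, dim))  # enhanced negative samples

# Pull real neighbors close, push generated fakes away (margin ranking).
pos = torch.cosine_similarity(anchor, real_neighbor)
neg = torch.cosine_similarity(anchor, fake_neighbor)
loss = torch.relu(0.5 - pos + neg).mean()
loss.backward()
```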
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
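The tracking paper above builds learnable graph matching on top of data association across frames; the sketch below shows only that underlying association step, scoring tracklet and detection features and solving the assignment. The features are random placeholders.

```python
# Data association as an assignment problem: score all tracklet/detection
# pairs, then pick the optimal one-to-one matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

tracklets = np.random.rand(4, 16)         # features of existing tracklets
detections = np.random.rand(5, 16)        # features of current-frame detections

cost = -tracklets @ detections.T          # higher similarity = lower cost
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one association
for t, d in zip(rows, cols):
    print(f"tracklet {t} -> detection {d}")
```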
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to limited computation and memory resources.
Subg-Con is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
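Subg-Con, summarized above, contrasts a central node with its sampled subgraph. A condensed sketch with mean pooling and an InfoNCE-style loss, all of which are assumptions beyond the one-line summary:

```python
# Contrast each central node against its own pooled subgraph, treating the
# other subgraphs in the batch as negatives.
import torch
import torch.nn as nn

encoder = nn.Linear(32, 16)
node_feats = torch.randn(8, 5, 32)     # 8 subgraphs, 5 nodes each
central = encoder(node_feats[:, 0])    # central node embedding, (8, 16)
subgraph = encoder(node_feats).mean(1) # pooled subgraph embedding, (8, 16)

logits = central @ subgraph.T / 0.1    # all-pairs similarity, temperature 0.1
labels = torch.arange(8)               # each node matches its own subgraph
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
```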
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.