Learning to Match Features with Seeded Graph Matching Network
- URL: http://arxiv.org/abs/2108.08771v1
- Date: Thu, 19 Aug 2021 16:25:23 GMT
- Title: Learning to Match Features with Seeded Graph Matching Network
- Authors: Hongkai Chen, Zixin Luo, Jiahui Zhang, Lei Zhou, Xuyang Bai, Zeyu Hu,
Chiew-Lan Tai, Long Quan
- Abstract summary: We propose the Seeded Graph Matching Network, a graph neural network with a sparse structure that reduces redundant connectivity and learns compact representations.
Experiments show that our method significantly reduces computational and memory complexity compared with typical attention-based networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Matching local features across images is a fundamental problem in computer
vision. Targeting high accuracy and efficiency, we propose the Seeded Graph
Matching Network, a graph neural network with a sparse structure that reduces
redundant connectivity and learns compact representations. The network consists
of: 1) a Seeding Module, which initializes the matching by generating a small set
of reliable matches as seeds; and 2) a Seeded Graph Neural Network, which uses
seed matches to pass messages within and across images and predicts assignment
costs. Three novel operations are proposed as basic elements for message
passing: 1) Attentional Pooling, which aggregates keypoint features within each
image to the seed matches; 2) Seed Filtering, which enhances seed features and
exchanges messages across images; and 3) Attentional Unpooling, which propagates
seed features back to the original keypoints. Experiments show that our method
significantly reduces computational and memory complexity compared with typical
attention-based networks while achieving competitive or higher performance.
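The complexity saving comes from using the seeds as a small bottleneck set: attention is computed between the N keypoints and S seeds (O(N·S) scores per layer) rather than between all keypoint pairs (O(N²)). The sketch below illustrates this with plain single-head dot-product attention; the function names and the residual update are illustrative assumptions, not the paper's exact operators, which also include learned projections and a Seed Filtering step between pooling and unpooling.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentional_pooling(keypoint_feats, seed_feats):
    # Each seed attends over all N keypoints in its image: S x N scores.
    scores = seed_feats @ keypoint_feats.T / np.sqrt(keypoint_feats.shape[1])
    attn = softmax(scores, axis=1)                 # rows sum to 1
    return attn @ keypoint_feats, attn             # pooled seed messages, (S, D)

def attentional_unpooling(keypoint_feats, seed_feats):
    # Each keypoint attends over the (few) seeds to receive messages back.
    scores = keypoint_feats @ seed_feats.T / np.sqrt(seed_feats.shape[1])
    attn = softmax(scores, axis=1)                 # N x S, not N x N
    return keypoint_feats + attn @ seed_feats      # residual update, (N, D)

rng = np.random.default_rng(0)
N, S, D = 512, 32, 64          # keypoints, seeds, feature dimension
kpts = rng.standard_normal((N, D))
seeds = rng.standard_normal((S, D))

pooled, _ = attentional_pooling(kpts, seeds)
updated = attentional_unpooling(kpts, pooled)
print(pooled.shape, updated.shape)   # (32, 64) (512, 64)
```

With S fixed at a few dozen seeds, the per-layer attention cost grows linearly in the number of keypoints instead of quadratically, which is the source of the memory savings claimed over fully connected attention networks.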
Related papers
- ResMatch: Residual Attention Learning for Local Feature Matching [51.07496081296863]
We rethink cross- and self-attention from the viewpoint of traditional feature matching and filtering.
We inject the similarity of descriptors and relative positions into cross- and self-attention score.
We mine intra- and inter-neighbors according to the similarity of descriptors and relative positions.
arXiv Detail & Related papers (2023-07-11T11:32:12Z)
- Learning Feature Matching via Matchable Keypoint-Assisted Graph Neural Network [52.29330138835208]
Accurately matching local features between a pair of images is a challenging computer vision task.
Previous studies typically use attention-based graph neural networks (GNNs) with fully connected graphs over keypoints within/across images.
We propose MaKeGNN, a sparse attention-based GNN architecture which bypasses non-repeatable keypoints and leverages matchable ones to guide message passing.
arXiv Detail & Related papers (2023-07-04T02:50:44Z)
- MD-Net: Multi-Detector for Local Feature Extraction [0.0]
We propose a deep feature extraction network capable of detecting a predefined number of complementary sets of keypoints at each image.
We train our network to predict the keypoints and compute the corresponding descriptors jointly.
With extensive experiments we show that our network, trained only on synthetically warped images, achieves competitive results on 3D reconstruction and re-localization tasks.
arXiv Detail & Related papers (2022-08-10T13:52:31Z)
- Image Keypoint Matching using Graph Neural Networks [22.33342295278866]
We propose a graph neural network for the problem of image matching.
The proposed method first generates initial soft correspondences between keypoints using localized node embeddings.
We evaluate our method on natural image datasets with keypoint annotations and show that, in comparison to a state-of-the-art model, our method speeds up inference times without sacrificing prediction accuracy.
arXiv Detail & Related papers (2022-05-27T23:38:44Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- DenseGAP: Graph-Structured Dense Correspondence Learning with Anchor Points [15.953570826460869]
Establishing dense correspondence between two images is a fundamental computer vision problem.
We introduce DenseGAP, a new solution for efficient Dense correspondence learning with a Graph-structured neural network conditioned on Anchor Points.
Our method advances the state-of-the-art of correspondence learning on most benchmarks.
arXiv Detail & Related papers (2021-12-13T18:59:30Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework built on a sequential Graph Convolutional Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
- An End-to-End Network for Co-Saliency Detection in One Single Image [47.35448093528382]
Co-saliency detection within a single image is a common vision problem that has not yet been well addressed.
This study proposes a novel end-to-end trainable network comprising a backbone net and two branch nets.
We construct a new dataset of 2,019 natural images with co-saliency in each image to evaluate the proposed method.
arXiv Detail & Related papers (2019-10-25T16:00:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.