MGCN: Descriptor Learning using Multiscale GCNs
- URL: http://arxiv.org/abs/2001.10472v3
- Date: Fri, 7 Aug 2020 10:18:28 GMT
- Title: MGCN: Descriptor Learning using Multiscale GCNs
- Authors: Yiqun Wang, Jing Ren, Dong-Ming Yan, Jianwei Guo, Xiaopeng Zhang,
Peter Wonka
- Abstract summary: We present a new non-learned feature that uses graph wavelets to decompose the Dirichlet energy on a surface.
We also propose a new multiscale graph convolutional network (MGCN) to transform a non-learned feature to a more discriminative descriptor.
- Score: 50.14172863706108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel framework for computing descriptors for characterizing
points on three-dimensional surfaces. First, we present a new non-learned
feature that uses graph wavelets to decompose the Dirichlet energy on a
surface. We call this new feature wavelet energy decomposition signature
(WEDS). Second, we propose a new multiscale graph convolutional network (MGCN)
to transform a non-learned feature to a more discriminative descriptor. Our
results show that the new descriptor WEDS is more discriminative than the
current state-of-the-art non-learned descriptors and that the combination of
WEDS and MGCN is better than the state-of-the-art learned descriptors. An
important design criterion for our descriptor is the robustness to different
surface discretizations including triangulations with varying numbers of
vertices. Our results demonstrate that previous graph convolutional networks
significantly overfit to a particular resolution or even a particular
triangulation, but MGCN generalizes well to different surface discretizations.
In addition, MGCN is compatible with previous descriptors and it can also be
used to improve the performance of other descriptors, such as the heat kernel
signature, the wave kernel signature, or the local point signature.
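
For intuition, here is a minimal sketch of the kind of computation the abstract describes: decomposing the Dirichlet energy E(f) = f^T L f of a per-vertex signal across spectral graph wavelet scales. The combinatorial Laplacian, the band-pass kernel g(s*lambda) = s*lambda*exp(-s*lambda), and the choice of scales are assumptions for illustration; the paper's WEDS is built on the cotangent Laplacian of a triangle mesh with its own wavelet construction.

```python
# Illustrative sketch only, not the paper's implementation of WEDS.
import numpy as np

def wavelet_energy_decomposition(adj, signal, scales=(0.5, 1.0, 2.0, 4.0)):
    """Per-vertex contribution to the Dirichlet energy of the wavelet-filtered
    signal at each scale; returns an array of shape (n_vertices, n_scales)."""
    adj = np.asarray(adj, dtype=float)
    L = np.diag(adj.sum(axis=1)) - adj               # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)                 # spectral decomposition of L
    coeffs = evecs.T @ signal                        # signal in the Laplacian eigenbasis
    feats = []
    for s in scales:
        g = s * evals * np.exp(-s * evals)           # assumed band-pass kernel g(s*lambda)
        psi = evecs @ (g * coeffs)                   # wavelet-filtered signal at scale s
        feats.append(psi * (L @ psi))                # vertex-wise split of psi^T L psi
    return np.stack(feats, axis=1)

# Toy usage on an 8-vertex ring graph with a smooth per-vertex signal
n = 8
idx = np.arange(n)
adj = np.zeros((n, n))
adj[idx, (idx + 1) % n] = 1
adj = adj + adj.T
x = np.sin(2 * np.pi * idx / n)
print(wavelet_energy_decomposition(adj, x).shape)    # (8, 4)
```

Each column of the returned matrix sums to the Dirichlet energy of the signal filtered at that scale, which is the multiscale decomposition idea in miniature.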
Related papers
- ALIKED: A Lighter Keypoint and Descriptor Extraction Network via
Deformable Transformation [27.04762347838776]
We propose the Sparse Deformable Descriptor Head (SDDH), which learns the deformable positions of supporting features for each keypoint and constructs deformable descriptors.
We show that the proposed network is both efficient and powerful in various visual measurement tasks, including image matching, 3D reconstruction, and visual relocalization.
arXiv Detail & Related papers (2023-04-07T12:05:39Z)
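
To make the deformable-descriptor idea in the ALIKED entry above concrete, here is a hedged sketch: predict a few sampling offsets per keypoint, sample the dense feature map at those deformed positions, and pool the samples into one descriptor. The class name, layer sizes, number of supporting samples, offset scaling, and pooling below are illustrative assumptions, not ALIKED's actual SDDH.

```python
# Hedged sketch of a deformable descriptor head; all hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableDescriptorHead(nn.Module):
    def __init__(self, channels=64, num_samples=8, desc_dim=128):
        super().__init__()
        self.offset_net = nn.Linear(channels, 2 * num_samples)      # K (dx, dy) offsets per keypoint
        self.project = nn.Linear(channels * num_samples, desc_dim)  # pool sampled features
        self.num_samples = num_samples

    def forward(self, feat_map, keypoints):
        # feat_map:  (1, C, H, W) dense feature map
        # keypoints: (M, 2) keypoint locations normalized to [-1, 1] as (x, y)
        _, C, H, W = feat_map.shape
        M, K = keypoints.shape[0], self.num_samples
        # The feature at each keypoint drives the predicted offsets
        kp_grid = keypoints.view(1, M, 1, 2)
        kp_feat = F.grid_sample(feat_map, kp_grid, align_corners=True)      # (1, C, M, 1)
        kp_feat = kp_feat.squeeze(0).squeeze(-1).t()                        # (M, C)
        offsets = self.offset_net(kp_feat).view(M, K, 2) * 0.1              # small learned offsets
        # Sample supporting features at keypoint + offset positions
        sample_grid = (keypoints.view(M, 1, 2) + offsets).view(1, M, K, 2)
        support = F.grid_sample(feat_map, sample_grid, align_corners=True)  # (1, C, M, K)
        support = support.squeeze(0).permute(1, 0, 2).reshape(M, C * K)     # (M, C*K)
        return F.normalize(self.project(support), dim=1)                    # (M, desc_dim)

# Toy usage: 5 keypoints on a random 64-channel feature map
head = DeformableDescriptorHead()
desc = head(torch.randn(1, 64, 32, 32), torch.rand(5, 2) * 2 - 1)
print(desc.shape)   # torch.Size([5, 128])
```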
- Hierarchical Prototype Networks for Continual Graph Representation Learning [90.78466005753505]
We present Hierarchical Prototype Networks (HPNs) which extract different levels of abstract knowledge in the form of prototypes to represent the continuously expanded graphs.
We show that HPNs not only outperform state-of-the-art baseline techniques but also consume relatively less memory.
arXiv Detail & Related papers (2021-11-30T14:15:14Z)
- UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z)
- SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration [57.28608414782315]
We introduce a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features.
Experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques.
arXiv Detail & Related papers (2020-11-24T15:00:56Z)
- Locality Preserving Dense Graph Convolutional Networks with Graph Context-Aware Node Representations [19.623379678611744]
Graph convolutional networks (GCNs) have been widely used for representation learning on graph data.
In many graph classification applications, GCN-based approaches have outperformed traditional methods.
We propose a locality-preserving dense GCN with graph context-aware node representations.
arXiv Detail & Related papers (2020-10-12T02:12:27Z)
- RGCF: Refined Graph Convolution Collaborative Filtering with concise and expressive embedding [42.46797662323393]
We develop a new GCN-based collaborative filtering model, named Refined Graph Convolution Collaborative Filtering (RGCF).
RGCF is more capable of capturing the implicit high-order connectivities inside the graph, and the resultant vector representations are more expressive.
We conduct extensive experiments on three public million-size datasets, demonstrating that our RGCF significantly outperforms state-of-the-art models.
arXiv Detail & Related papers (2020-07-07T12:26:10Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
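
The selection step described in the active-learning entry above can be sketched in a few lines: given per-node scores from a GCN trained to separate labelled from unlabelled nodes, query the unlabelled nodes the model rates as most "unlabelled-like", i.e. most different from the current labelled set. The GCN itself is omitted here, and the fixed top-k budget rule is an assumption for illustration.

```python
# Minimal sketch of the query-selection rule; `scores` stands in for the GCN's
# per-node probability of being "unlabelled" (the network is not shown).
import numpy as np

def select_queries(scores, labelled_mask, budget):
    """Pick the unlabelled nodes judged most different from the labelled set."""
    scores = np.asarray(scores, dtype=float)
    candidates = np.where(~labelled_mask)[0]               # only unlabelled nodes are eligible
    ranked = candidates[np.argsort(-scores[candidates])]   # most "unlabelled-like" first
    return ranked[:budget]

# Toy usage: 6 nodes, nodes 0 and 3 already labelled, query 2 more
scores = [0.1, 0.9, 0.4, 0.2, 0.8, 0.6]
labelled = np.array([True, False, False, True, False, False])
print(select_queries(scores, labelled, budget=2))           # [1 4]
```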
- PointGMM: a Neural GMM Network for Point Clouds [83.9404865744028]
Point clouds are a popular representation for 3D shapes, but they encode a particular sampling without accounting for shape priors or non-local information.
We present PointGMM, a neural network that learns to generate hierarchical Gaussian mixture models (hGMMs) which are characteristic of the shape class.
We show that, as a generative model, PointGMM learns a meaningful latent space which enables generating consistent interpolations between existing shapes.
arXiv Detail & Related papers (2020-03-30T10:34:59Z)
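
As a rough picture of what the PointGMM entry above means by generating Gaussian mixtures for a shape class: a decoded mixture is just a set of weights, means, and covariances from which a point cloud can be sampled. The flat two-component mixture below is hard-coded for illustration; the learned decoder, the hierarchy in hGMMs, and the latent interpolation are not shown.

```python
# Illustrative sketch only: sample a point cloud from mixture parameters of the
# kind a PointGMM-style decoder would output (parameters here are made up).
import numpy as np

def sample_from_gmm(weights, means, covs, num_points, rng=None):
    """Draw `num_points` 3D points from a Gaussian mixture (one shape sample)."""
    rng = np.random.default_rng(rng)
    weights = np.asarray(weights) / np.sum(weights)
    comp = rng.choice(len(weights), size=num_points, p=weights)    # pick a component per point
    return np.stack([rng.multivariate_normal(means[k], covs[k]) for k in comp])

# Toy two-component mixture standing in for a decoded shape
means = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
covs = np.stack([0.01 * np.eye(3), 0.02 * np.eye(3)])
cloud = sample_from_gmm([0.6, 0.4], means, covs, num_points=1024, rng=0)
print(cloud.shape)   # (1024, 3)
```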