Convolutional Hough Matching Networks
- URL: http://arxiv.org/abs/2103.16831v1
- Date: Wed, 31 Mar 2021 06:17:03 GMT
- Title: Convolutional Hough Matching Networks
- Authors: Juhong Min, Minsu Cho
- Abstract summary: We introduce a Hough transform perspective on convolutional matching and propose an effective geometric matching algorithm, dubbed Convolutional Hough Matching (CHM).
We cast it into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns non-rigid matching with a small number of interpretable parameters.
Our method sets a new state of the art on standard benchmarks for semantic visual correspondence, proving its strong robustness to challenging intra-class variations.
- Score: 39.524998833064956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite advances in feature representation, leveraging geometric relations is
crucial for establishing reliable visual correspondences under large variations
of images. In this work we introduce a Hough transform perspective on
convolutional matching and propose an effective geometric matching algorithm,
dubbed Convolutional Hough Matching (CHM). The method distributes similarities
of candidate matches over a geometric transformation space and evaluates them in
a convolutional manner. We cast it into a trainable neural layer with a
semi-isotropic high-dimensional kernel, which learns non-rigid matching with a
small number of interpretable parameters. To validate its effect, we develop a
neural network with CHM layers that perform convolutional matching in the
space of translation and scaling. Our method sets a new state of the art on
standard benchmarks for semantic visual correspondence, proving its strong
robustness to challenging intra-class variations.
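The abstract describes CHM as distributing candidate-match similarities over a transformation space and evaluating them convolutionally. A minimal sketch of that idea over 2D translations (not the authors' code; their learned semi-isotropic 6D kernel over translation and scale is replaced here by a fixed 4D box kernel for illustration):

```python
import numpy as np

def correlation_volume(feat_a, feat_b):
    """4D similarity volume between two (H, W, C) L2-normalized feature maps:
    entry [i, j, k, l] scores the candidate match (i, j) <-> (k, l)."""
    return np.einsum('ijc,klc->ijkl', feat_a, feat_b)

def hough_vote(corr, kernel):
    """Naive zero-padded 4D cross-correlation: each candidate match gathers
    votes from geometrically consistent neighbouring matches."""
    pads = [(s // 2, s // 2) for s in kernel.shape]
    padded = np.pad(corr, pads)
    out = np.empty_like(corr)
    for idx in np.ndindex(corr.shape):
        window = padded[tuple(slice(i, i + s) for i, s in zip(idx, kernel.shape))]
        out[idx] = np.sum(window * kernel)
    return out
```

A uniform kernel such as `np.ones((3, 3, 3, 3)) / 81.0` simply averages votes from neighbouring matches; in the paper the kernel weights are learned, with weight sharing that makes the kernel semi-isotropic.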
Related papers
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
First, we introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
Second, we propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- Explicit Correspondence Matching for Generalizable Neural Radiance Fields [49.49773108695526]
We present a new NeRF method that is able to generalize to new unseen scenarios and perform novel view synthesis with as few as two source views.
The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point on different views.
Our method achieves state-of-the-art results on different evaluation settings, with the experiments showing a strong correlation between our learned cosine feature similarity and volume density.
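The matching cue this entry describes, cosine similarity between per-view features sampled at the 2D projections of a 3D point, can be sketched as follows (names are illustrative, not from the paper's code):

```python
import numpy as np

def cosine_similarity(feats_view1, feats_view2, eps=1e-8):
    """Row-wise cosine similarity between (N, C) feature arrays, one row per
    3D point; a high score suggests the point lies on a visible surface."""
    a = feats_view1 / (np.linalg.norm(feats_view1, axis=-1, keepdims=True) + eps)
    b = feats_view2 / (np.linalg.norm(feats_view2, axis=-1, keepdims=True) + eps)
    return np.sum(a * b, axis=-1)
```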
arXiv Detail & Related papers (2023-04-24T17:46:01Z)
- Motion Estimation for Large Displacements and Deformations [7.99536002595393]
Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient, and smoothness, but they struggle with large displacements and deformations.
This paper addresses this problem and presents HybridFlow, a variational motion estimation framework for large displacements and deformations.
arXiv Detail & Related papers (2022-06-24T18:53:22Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
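The distance-preserving initial representations this entry mentions are obtained by modifying multi-dimensional scaling. As an illustrative sketch (not the paper's exact variant), classical MDS recovers coordinates from pairwise distances, so the embedding is unchanged by rotations, translations, and reflections of the input points:

```python
import numpy as np

def classical_mds(dist, k):
    """Embed an (n, n) Euclidean distance matrix into k dimensions,
    preserving pairwise distances up to a rigid transformation."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J             # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]              # keep the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```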
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Convolutional Hough Matching Networks for Robust and Efficient Visual Correspondence [41.061667361696465]
We introduce a Hough transform perspective on convolutional matching and propose an effective geometric matching algorithm, dubbed Convolutional Hough Matching (CHM).
Our method sets a new state of the art on standard benchmarks for semantic visual correspondence, proving its strong robustness to challenging intra-class variations.
arXiv Detail & Related papers (2021-09-11T08:39:41Z)
- Learning to Discover Reflection Symmetry via Polar Matching Convolution [33.77926792753373]
We introduce a new convolutional technique, dubbed the polar matching convolution, which leverages a polar feature pooling, a self-similarity encoding, and a kernel design for axes of different angles.
The proposed high-dimensional kernel convolution network effectively learns to discover symmetry patterns from real-world images.
Experiments demonstrate that our method outperforms state-of-the-art methods in terms of accuracy and robustness.
arXiv Detail & Related papers (2021-08-30T01:50:51Z)
- Deep Transformation-Invariant Clustering [24.23117820167443]
We present an approach that does not rely on abstract features but instead learns to predict image transformations.
This learning process naturally fits in the gradient-based training of K-means and Gaussian mixture model.
We demonstrate that our novel approach yields competitive and highly promising results on standard image clustering benchmarks.
arXiv Detail & Related papers (2020-06-19T13:43:08Z)
- Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs [81.12344211998635]
A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
arXiv Detail & Related papers (2020-03-11T17:21:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.