Warp Consistency for Unsupervised Learning of Dense Correspondences
- URL: http://arxiv.org/abs/2104.03308v2
- Date: Thu, 8 Apr 2021 13:06:59 GMT
- Title: Warp Consistency for Unsupervised Learning of Dense Correspondences
- Authors: Prune Truong and Martin Danelljan and Fisher Yu and Luc Van Gool
- Abstract summary: The key challenge in learning dense correspondences is the lack of ground-truth matches for real image pairs.
We propose Warp Consistency, an unsupervised learning objective for dense correspondence regression.
Our approach sets a new state-of-the-art on several challenging benchmarks, including MegaDepth, RobotCar and TSS.
- Score: 116.56251250853488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The key challenge in learning dense correspondences lies in the lack of
ground-truth matches for real image pairs. While photometric consistency losses
provide unsupervised alternatives, they struggle with large appearance changes,
which are ubiquitous in geometric and semantic matching tasks. Moreover,
methods relying on synthetic training pairs often suffer from poor
generalisation to real data.
We propose Warp Consistency, an unsupervised learning objective for dense
correspondence regression. Our objective is effective even in settings with
large appearance and view-point changes. Given a pair of real images, we first
construct an image triplet by applying a randomly sampled warp to one of the
original images. We derive and analyze all flow-consistency constraints arising
between the triplet. From our observations and empirical results, we design a
general unsupervised objective employing two of the derived constraints. We
validate our warp consistency loss by training three recent dense
correspondence networks for the geometric and semantic matching tasks. Our
approach sets a new state-of-the-art on several challenging benchmarks,
including MegaDepth, RobotCar and TSS. Code and models will be released at
https://github.com/PruneTruong/DenseMatching.
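As a concrete illustration of the triplet construction and the two constraints described in the abstract, here is a minimal PyTorch sketch. It assumes a matching network `net(a, b)` that returns the dense flow from image `a` to image `b`; the helper names, flow conventions, and the plain L1 penalty are illustrative assumptions, and the sketch omits the visibility masks and weighting handled in the authors' released code (linked above).

```python
import torch
import torch.nn.functional as F

def warp_with_flow(x, flow):
    # Sample x (B,C,H,W) at positions displaced by flow (B,2,H,W),
    # i.e. out(p) = x(p + flow(p)), using bilinear grid_sample.
    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=flow.device),
                            torch.arange(W, device=flow.device), indexing="ij")
    base = torch.stack((xs, ys)).float().unsqueeze(0)        # (1,2,H,W), x then y
    pos = base + flow                                        # absolute coordinates
    pos_x = 2.0 * pos[:, 0] / (W - 1) - 1.0                  # normalise to [-1,1]
    pos_y = 2.0 * pos[:, 1] / (H - 1) - 1.0
    grid = torch.stack((pos_x, pos_y), dim=-1)               # (B,H,W,2)
    return F.grid_sample(x, grid, align_corners=True)

def warp_consistency_loss(net, img_i, img_j, w_known):
    # Build the triplet: I' is I warped by the randomly sampled flow
    # w_known, so the true flow from I' back to I is w_known itself.
    img_ip = warp_with_flow(img_i, w_known)
    # W-bipath constraint: compose the estimated flows I'->J and J->I
    # and compare the composition with the known warp.
    f_ip_j = net(img_ip, img_j)
    f_j_i = net(img_j, img_i)
    f_ip_i = f_ip_j + warp_with_flow(f_j_i, f_ip_j)          # flow composition
    loss_bipath = (f_ip_i - w_known).abs().mean()
    # Warp-supervision constraint: predict I'->I directly against the warp.
    loss_warp = (net(img_ip, img_i) - w_known).abs().mean()
    return loss_bipath + loss_warp
```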
Related papers
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator can reliably enrich paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- Probabilistic Warp Consistency for Weakly-Supervised Semantic Correspondences [118.6018141306409]
We propose Probabilistic Warp Consistency, a weakly-supervised learning objective for semantic matching.
We first construct an image triplet by applying a known warp to one of the images in a pair depicting different instances of the same object class.
Our objective also brings substantial improvements in the strongly-supervised regime, when combined with keypoint annotations.
arXiv Detail & Related papers (2022-03-08T18:55:11Z)
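The entry above describes a probabilistic counterpart of the warp-consistency triplet. The following sketch compresses the core idea only, assuming a hypothetical `net(a, b)` that outputs row-stochastic matching probabilities over an N-cell coarse grid; the paper's actual objective additionally models unmatched and occluded regions, which are omitted here.

```python
import torch
import torch.nn.functional as F

def prob_warp_consistency_loss(net, img_i, img_j, img_ip, target_idx):
    # net(a, b) -> (B,N,N): probability that cell n of image a matches
    # cell m of image b (rows sum to one). target_idx (B,N) holds, for
    # each cell of I', the index of its true match in I, derived from
    # the known synthetic warp used to create I'.
    p_ip_j = net(img_ip, img_j)
    p_j_i = net(img_j, img_i)
    # Chain the two probabilistic mappings I'->J->I by matrix product.
    p_ip_i = torch.bmm(p_ip_j, p_j_i)                        # (B,N,N)
    # Cross-entropy against the matches implied by the known warp.
    log_p = torch.log(p_ip_i.clamp(min=1e-9))
    return F.nll_loss(log_p.flatten(0, 1), target_idx.flatten())
```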
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
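The image-level contrastive component mentioned above is, in spirit, an InfoNCE-style objective. Below is a generic sketch of such a loss, not the paper's exact multi-level formulation; `z_a` and `z_b` are assumed to be pooled embeddings of two paired images.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.07):
    # z_a, z_b: (B,D) embeddings; matching rows are positive pairs,
    # every other row in the batch serves as a negative.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                     # cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```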
- Semi-supervised Dense Keypoints Using Unlabeled Multiview Images [22.449168666514677]
This paper presents a new end-to-end semi-supervised framework to learn a dense keypoint detector using unlabeled multiview images.
A key challenge lies in finding the exact correspondences between the dense keypoints in multiple views.
We derive a new probabilistic epipolar constraint that encodes the two desired properties.
arXiv Detail & Related papers (2021-09-20T04:57:57Z)
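The probabilistic epipolar constraint above builds on the classical two-view relation x2ᵀ F x1 = 0. As background, here is a sketch of the deterministic residual (the first-order Sampson distance) that such a constraint softens; the paper's probabilistic variant is not reproduced here.

```python
import torch

def sampson_distance(x1, x2, F_mat):
    # x1, x2: (N,3) homogeneous matched points in two views.
    # F_mat: (3,3) fundamental matrix, so x2^T F x1 = 0 for exact matches.
    Fx1 = x1 @ F_mat.t()                  # epipolar lines in image 2
    Ftx2 = x2 @ F_mat                     # epipolar lines in image 1
    num = (x2 * Fx1).sum(dim=1) ** 2      # squared algebraic residual
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den.clamp(min=1e-9)
```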
- Deep Matching Prior: Test-Time Optimization for Dense Correspondence [37.492074298574664]
We show that an image pair-specific prior can be captured by solely optimizing the untrained matching networks on an input pair of images.
Experiments demonstrate that our framework, dubbed Deep Matching Prior (DMP), is competitive with, or even outperforms, the latest learning-based methods.
arXiv Detail & Related papers (2021-06-06T10:56:01Z)
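Test-time optimization of an untrained matching network, as summarized above, amounts to a per-pair fitting loop. A minimal sketch, assuming an unsupervised `photometric_loss` that warps one image toward the other with the predicted flow; the hyper-parameters and names are illustrative, not DMP's actual settings.

```python
import torch

def test_time_optimize(net, img_a, img_b, photometric_loss,
                       steps=300, lr=3e-4):
    # Fit the (possibly untrained) network to a single image pair;
    # the network architecture itself acts as the matching prior.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        flow = net(img_a, img_b)                 # dense flow a -> b
        loss = photometric_loss(img_a, img_b, flow)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(img_a, img_b)                 # pair-specific flow
```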
- Recurrent Multi-view Alignment Network for Unsupervised Surface Registration [79.72086524370819]
Learning non-rigid registration in an end-to-end manner is challenging due to the inherent high degrees of freedom and the lack of labeled training data.
We propose to represent the non-rigid transformation with a point-wise combination of several rigid transformations.
We also introduce a differentiable loss function that measures the 3D shape similarity on the projected multi-view 2D depth images.
arXiv Detail & Related papers (2020-11-24T14:22:42Z)
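The point-wise combination of rigid transformations above can be written as p_i' = Σ_k w_ik (R_k p_i + t_k), a blend-skinning-style parameterization. A minimal sketch of that deformation model, with the tensor shapes as assumptions:

```python
import torch

def blend_rigid_transforms(points, rot, trans, weights):
    # points: (N,3); rot: (K,3,3); trans: (K,3); weights: (N,K) with rows
    # summing to one (e.g. a softmax), one weight per point and transform.
    # Apply every rigid transform to every point: (K,N,3).
    moved = torch.einsum('kij,nj->kni', rot, points) + trans[:, None, :]
    # Weight and sum over the K transforms: (N,3) deformed points.
    return torch.einsum('nk,kni->ni', weights, moved)
```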
- Unsupervised Landmark Learning from Unpaired Data [117.81440795184587]
Recent attempts for unsupervised landmark learning leverage synthesized image pairs that are similar in appearance but different in poses.
We propose a cross-image cycle consistency framework which applies the swapping-reconstruction strategy twice to obtain the final supervision.
Our proposed framework is shown to outperform strong baselines by a large margin.
arXiv Detail & Related papers (2020-06-29T13:57:20Z)
- Robust Face Verification via Disentangled Representations [20.393894616979402]
We introduce a robust algorithm for face verification, deciding whether two images are of the same person or not.
We use the generative model during training as an online augmentation method instead of a test-time purifier that removes adversarial noise.
We experimentally show that, when coupled with adversarial training, the proposed scheme converges with a weak inner solver and has higher clean and robust accuracy than state-of-the-art methods when evaluated against white-box physical attacks.
arXiv Detail & Related papers (2020-06-05T19:17:02Z)