Deep Matching Prior: Test-Time Optimization for Dense Correspondence
- URL: http://arxiv.org/abs/2106.03090v1
- Date: Sun, 6 Jun 2021 10:56:01 GMT
- Title: Deep Matching Prior: Test-Time Optimization for Dense Correspondence
- Authors: Sunghwan Hong, Seungryong Kim
- Abstract summary: We show that an image pair-specific prior can be captured by solely optimizing the untrained matching networks on an input pair of images.
Experiments demonstrate that our framework, dubbed Deep Matching Prior (DMP), is competitive with, or even outperforms, the latest learning-based methods.
- Score: 37.492074298574664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional techniques for establishing dense correspondences across visually or
semantically similar images have focused on designing a task-specific matching
prior, which is difficult to model. To overcome this, recent learning-based
methods have attempted to learn a good matching prior within a model itself on
large training data. The performance improvement was apparent, but the need for
sufficient training data and intensive learning hinders their applicability.
Moreover, using a fixed model at test time does not account for the fact that
a pair of images may require its own prior, resulting in limited
performance and poor generalization to unseen images. In this paper, we show
that an image pair-specific prior can be captured by solely optimizing the
untrained matching networks on an input pair of images. Tailored for such
test-time optimization for dense correspondence, we present a residual matching
network and a confidence-aware contrastive loss to guarantee a meaningful
convergence. Experiments demonstrate that our framework, dubbed Deep Matching
Prior (DMP), is competitive with, or even outperforms, the latest
learning-based methods on several benchmarks for geometric matching and
semantic matching, even though it requires neither large training data nor
intensive learning. With the networks pre-trained, DMP attains state-of-the-art
performance on all benchmarks.
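To make the core idea concrete, below is a minimal PyTorch-style sketch of pair-specific test-time optimization: an untrained network is optimized on a single input pair, with a simple photometric L1 loss standing in for the paper's residual matching network and confidence-aware contrastive loss. The network, loss, and hyperparameters here are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch of test-time optimization on one image pair (not DMP's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    """Illustrative stand-in for a matching network; not DMP's residual architecture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2-channel dense flow field
        )

    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

def warp(img, flow):
    """Backward-warp img by a dense flow field using grid_sample."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device),
        torch.arange(w, device=img.device),
        indexing="ij",
    )
    base = torch.stack([xs, ys], dim=-1).float()      # (h, w, 2), xy order
    coords = base + flow.permute(0, 2, 3, 1)          # add predicted offsets
    x = 2.0 * coords[..., 0] / (w - 1) - 1.0          # normalize to [-1, 1]
    y = 2.0 * coords[..., 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([x, y], dim=-1), align_corners=True)

def optimize_pair(src, tgt, steps=300, lr=1e-3):
    model = TinyFlowNet()                    # untrained: the input pair is the only data
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        flow = model(src, tgt)
        loss = F.l1_loss(warp(src, flow), tgt)  # simple photometric surrogate loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model(src, tgt)                   # pair-specific dense flow
```

In this framing the only "training data" is the input pair itself; the inductive bias of the convolutional architecture plays the role of the matching prior, which is exactly the pair-specific prior the abstract argues for.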
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator can reliably enrich paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- Self-Supervised Pretraining for 2D Medical Image Segmentation [0.0]
Self-supervised learning offers a way to lower the need for manually annotated data by pretraining models for a specific domain on unlabelled data.
We find that self-supervised pretraining on natural images and target-domain-specific images leads to the fastest and most stable downstream convergence.
In low-data scenarios, supervised ImageNet pretraining achieves the best accuracy, requiring less than 100 annotated samples to realise close to minimal error.
arXiv Detail & Related papers (2022-09-01T09:25:22Z)
- Meta-Registration: Learning Test-Time Optimization for Single-Pair Image Registration [0.37501702548174964]
This work formulates image registration as a meta-learning algorithm.
Experiments are presented in this paper using clinical transrectal ultrasound image data from 108 prostate cancer patients.
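Formulating registration as meta-learning typically means learning an initialization that a few test-time gradient steps can adapt to a single pair. The following is a hedged MAML-style sketch of that general idea, not the paper's exact algorithm; `loss_fn` is a hypothetical registration objective (e.g., image dissimilarity after warping) and `pairs` an assumed set of training image pairs.

```python
# MAML-style sketch: learn an initialization for fast single-pair adaptation.
import torch

def inner_adapt(params, loss_fn, pair, inner_lr=0.01, steps=3):
    """A few test-time gradient steps on one pair, keeping the graph for the meta-update."""
    for _ in range(steps):
        grads = torch.autograd.grad(loss_fn(params, pair), params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params

def meta_train(init_params, loss_fn, pairs, meta_lr=1e-3, epochs=10):
    """Optimize the initialization so that post-adaptation registration quality is high."""
    opt = torch.optim.Adam(init_params, lr=meta_lr)
    for _ in range(epochs):
        for pair in pairs:
            adapted = inner_adapt(init_params, loss_fn, pair)
            meta_loss = loss_fn(adapted, pair)  # quality *after* adaptation drives the init
            opt.zero_grad()
            meta_loss.backward()
            opt.step()
    return init_params
```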
arXiv Detail & Related papers (2022-07-22T10:30:00Z)
- Deep Translation Prior: Test-time Training for Photorealistic Style Transfer [36.82737412912885]
Recent techniques to solve photorealistic style transfer within deep convolutional neural networks (CNNs) generally require intensive training from large-scale datasets.
We propose a novel framework, dubbed Deep Translation Prior (DTP), to accomplish photorealistic style transfer through test-time training on a given input image pair with untrained networks.
arXiv Detail & Related papers (2021-12-12T04:54:27Z)
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
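Image-level contrastive learning of this kind is typically built on an InfoNCE-style objective; a generic sketch follows, which is not necessarily the paper's exact multi-level formulation.

```python
# Generic InfoNCE-style contrastive loss over feature vectors (illustrative).
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.07):
    """anchor, positive: (d,) feature vectors; negatives: (n, d) feature matrix."""
    a = F.normalize(anchor, dim=0)
    pos = torch.dot(a, F.normalize(positive, dim=0)) / tau
    negs = F.normalize(negatives, dim=1) @ a / tau   # (n,) similarities to negatives
    logits = torch.cat([pos.unsqueeze(0), negs])     # positive sits at index 0
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)
```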
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
- Warp Consistency for Unsupervised Learning of Dense Correspondences [116.56251250853488]
A key challenge in learning dense correspondences is the lack of ground-truth matches for real image pairs.
We propose Warp Consistency, an unsupervised learning objective for dense correspondence regression.
Our approach sets a new state-of-the-art on several challenging benchmarks, including MegaDepth, RobotCar and TSS.
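Roughly, Warp Consistency synthesizes I' = W(I) with a known random warp W and constrains the flows estimated between I, J, and I' to be consistent with W. The sketch below glosses over flow-direction conventions; `apply_warp` is an assumed backward-warping helper (e.g., the `warp` sketched for DMP above), and all names are illustrative.

```python
# Simplified sketch of a warp-consistency constraint (not the paper's exact objective).
import torch.nn.functional as F

def warp_consistency_loss(flow_net, img_i, img_j, w_flow, apply_warp):
    """W-bipath idea (simplified): flow(J -> I') should match W composed with flow(J -> I)."""
    img_ip = apply_warp(img_i, w_flow)        # I' = W(I); W is known exactly
    flow_j_to_i = flow_net(img_j, img_i)      # estimated; no ground truth needed
    flow_j_to_ip = flow_net(img_j, img_ip)
    # compose the known warp with the estimated flow: (W o F)(x) = F(x) + W(x + F(x))
    composed = flow_j_to_i + apply_warp(w_flow, flow_j_to_i)
    return F.l1_loss(flow_j_to_ip, composed)
```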
arXiv Detail & Related papers (2021-04-07T17:58:22Z)
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [57.33699905852397]
We propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed.
Our method simultaneously clusters the data while enforcing consistency between cluster assignments.
Our method can be trained with large and small batches and can scale to unlimited amounts of data.
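The swapped-prediction objective at SwAV's core can be sketched as follows; note that the real method computes the assignment codes with a Sinkhorn-Knopp equipartition step, for which the plain softmax below is only a stand-in.

```python
# Simplified sketch of SwAV's swapped-prediction loss (Sinkhorn step replaced by softmax).
import torch
import torch.nn.functional as F

def swapped_prediction_loss(z1, z2, prototypes, tau=0.1):
    """z1, z2: (b, d) embeddings of two views of the same images; prototypes: (k, d)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c = F.normalize(prototypes, dim=1)
    scores1, scores2 = z1 @ c.T, z2 @ c.T            # (b, k) similarity to prototypes
    with torch.no_grad():                            # "codes": soft cluster assignments
        q1 = torch.softmax(scores1 / tau, dim=1)     # (SwAV proper uses Sinkhorn-Knopp)
        q2 = torch.softmax(scores2 / tau, dim=1)
    p1 = torch.log_softmax(scores1 / tau, dim=1)     # each view predicts the
    p2 = torch.log_softmax(scores2 / tau, dim=1)     # other view's assignment
    return -0.5 * ((q2 * p1).sum(1).mean() + (q1 * p2).sum(1).mean())
```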
arXiv Detail & Related papers (2020-06-17T14:00:42Z)