Correlation Verification for Image Retrieval
- URL: http://arxiv.org/abs/2204.01458v1
- Date: Mon, 4 Apr 2022 13:18:49 GMT
- Title: Correlation Verification for Image Retrieval
- Authors: Seongwon Lee, Hongje Seong, Suhyeon Lee, Euntai Kim
- Abstract summary: We propose a novel image retrieval re-ranking network named Correlation Verification Networks (CVNet)
CVNet compresses dense feature correlation into image similarity while learning diverse geometric matching patterns from various image pairs.
Our proposed network shows state-of-the-art performance on several retrieval benchmarks with a significant margin.
- Score: 15.823918683848877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geometric verification is considered a de facto solution for the re-ranking
task in image retrieval. In this study, we propose a novel image retrieval
re-ranking network named Correlation Verification Networks (CVNet). Our
proposed network, comprising deeply stacked 4D convolutional layers, gradually
compresses dense feature correlation into image similarity while learning
diverse geometric matching patterns from various image pairs. To enable
cross-scale matching, it builds feature pyramids and constructs cross-scale
feature correlations within a single inference, replacing costly multi-scale
inferences. In addition, we use curriculum learning with the hard negative
mining and Hide-and-Seek strategy to handle hard samples without losing
generality. Our proposed re-ranking network shows state-of-the-art performance
on several retrieval benchmarks with a significant margin (+12.6% in mAP on
ROxford-Hard+1M set) over state-of-the-art methods. The source code and models
are available online: https://github.com/sungonce/CVNet.
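The core idea above, compressing a dense 4D feature correlation into a single image similarity, with cross-scale matching via a feature pyramid, can be sketched in plain NumPy. This is a simplified illustration, not the paper's method: the learned stacked 4D convolutions are replaced by a hand-crafted mean-of-max score, the pyramid is plain 2x2 average pooling, and all function names are illustrative.

```python
import numpy as np

def l2_normalize(f, eps=1e-8):
    """Channel-wise L2 normalization of a (C, H, W) feature map."""
    return f / (np.linalg.norm(f, axis=0, keepdims=True) + eps)

def dense_correlation(fq, fk):
    """Dense correlation between two (C, H, W) feature maps.

    Returns a 4D tensor of shape (Hq, Wq, Hk, Wk) whose entry
    [i, j, k, l] is the cosine similarity between query location
    (i, j) and key location (k, l).
    """
    C, Hq, Wq = fq.shape
    _, Hk, Wk = fk.shape
    q = l2_normalize(fq).reshape(C, Hq * Wq)
    k = l2_normalize(fk).reshape(C, Hk * Wk)
    return (q.T @ k).reshape(Hq, Wq, Hk, Wk)

def avg_pool2(f):
    """2x2 average pooling of a (C, H, W) map (H and W assumed even),
    a cheap stand-in for one level of a feature pyramid."""
    C, H, W = f.shape
    return f.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def cross_scale_similarity(fq, fk, levels=2):
    """Correlate each query pyramid level against the key map and
    reduce each 4D correlation with a mean-of-max score -- a toy,
    non-learned stand-in for CVNet's 4D convolutional compression."""
    scores = []
    q = fq
    for _ in range(levels):
        corr = dense_correlation(q, fk)
        flat = corr.reshape(-1, corr.shape[2] * corr.shape[3])
        scores.append(flat.max(axis=1).mean())  # best match per query cell
        q = avg_pool2(q)  # coarser query scale for the next round
    return float(np.mean(scores))
```

Note how all pyramid levels are correlated against the same key map within one pass, mirroring the paper's point that cross-scale correlations replace costly multi-scale inferences.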
Related papers
- CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition [73.51329037954866]
We propose a robust global representation method with cross-image correlation awareness for visual place recognition.
Our method uses the attention mechanism to correlate multiple images within a batch.
Our method outperforms state-of-the-art methods by a large margin with significantly less training time.
arXiv Detail & Related papers (2024-02-29T15:05:11Z)
- OsmLocator: locating overlapping scatter marks with a non-training generative perspective [48.50108853199417]
Locating overlapping marks faces many difficulties, such as the absence of texture, limited contextual information, hollow shapes, and tiny size.
Here, we formulate it as an optimization problem on clustering-based re-visualization from a non-training generative perspective.
We also built a dataset named 2023, containing hundreds of scatter images with different markers and various levels of overlapping severity, and evaluated the proposed method against existing methods.
arXiv Detail & Related papers (2023-12-18T12:39:48Z)
- Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose rank-enhanced low-dimensional convolution set (Re-ConvSet)
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z)
- Cross-Modality Sub-Image Retrieval using Contrastive Multimodal Image Representations [3.3754780158324564]
Cross-modality image retrieval is challenging, since images of similar (or even the same) content captured by different modalities might share few common structures.
We propose a new application-independent content-based image retrieval system for reverse (sub-)image search across modalities.
arXiv Detail & Related papers (2022-01-10T19:04:28Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Learning to Focus: Cascaded Feature Matching Network for Few-shot Image Recognition [38.49419948988415]
Deep networks can learn to accurately recognize objects of a category by training on a large number of images.
A meta-learning challenge, known as low-shot image recognition, arises when only a few annotated images are available for learning a recognition model for a category.
Our method, called Cascaded Feature Matching Network (CFMN), is proposed to solve this problem.
Experiments for few-shot learning on two standard datasets, miniImageNet and Omniglot, have confirmed the effectiveness of our method.
arXiv Detail & Related papers (2021-01-13T11:37:28Z)
- Image Retrieval for Structure-from-Motion via Graph Convolutional Network [13.040952255039702]
We present a novel retrieval method based on Graph Convolutional Network (GCN) to generate accurate pairwise matches without costly redundancy.
By constructing a subgraph surrounding the query image as input data, we adopt a learnable GCN to exploit whether nodes in the subgraph have overlapping regions with the query photograph.
Experiments demonstrate that our method performs remarkably well on the challenging dataset of highly ambiguous and duplicated scenes.
arXiv Detail & Related papers (2020-09-17T04:03:51Z)
- High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state-of-the-art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.