VDNA-PR: Using General Dataset Representations for Robust Sequential Visual Place Recognition
- URL: http://arxiv.org/abs/2403.09025v1
- Date: Thu, 14 Mar 2024 01:30:28 GMT
- Title: VDNA-PR: Using General Dataset Representations for Robust Sequential Visual Place Recognition
- Authors: Benjamin Ramtoula, Daniele De Martini, Matthew Gadd, Paul Newman
- Abstract summary: This paper adapts a general dataset representation technique to produce robust Visual Place Recognition (VPR) descriptors.
Our experiments show that our representation can allow for better robustness than current solutions to serious domain shifts away from the training data distribution.
- Score: 17.393105901701098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper adapts a general dataset representation technique to produce robust Visual Place Recognition (VPR) descriptors, crucial to enable real-world mobile robot localisation. Two parallel lines of work on VPR have shown, on one side, that general-purpose off-the-shelf feature representations can provide robustness to domain shifts, and, on the other, that fused information from sequences of images improves performance. In our recent work on measuring domain gaps between image datasets, we proposed a Visual Distribution of Neuron Activations (VDNA) representation to represent datasets of images. This representation can naturally handle image sequences and provides a general and granular feature representation derived from a general-purpose model. Moreover, our representation is based on tracking neuron activation values over the list of images to represent and is not limited to a particular neural network layer, therefore having access to high- and low-level concepts. This work shows how VDNAs can be used for VPR by learning a very lightweight and simple encoder to generate task-specific descriptors. Our experiments show that our representation can allow for better robustness than current solutions to serious domain shifts away from the training data distribution, such as to indoor environments and aerial imagery.
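As a rough illustration of the idea described in the abstract (not the authors' implementation), a VDNA-style representation can be sketched as one normalised histogram of activation values per tracked neuron, accumulated over the image sequence, followed by a lightweight encoder producing the task-specific descriptor. All names, sizes, and the random projection standing in for the learned encoder are illustrative:

```python
import numpy as np

def vdna_descriptor(activations, n_bins=32, value_range=(-3.0, 3.0)):
    """VDNA-style representation: one normalised histogram of activation
    values per neuron, accumulated over the whole image sequence.

    activations: shape (n_images, n_neurons) -- activations of the
    tracked neurons for each image in the sequence (illustrative only).
    """
    n_neurons = activations.shape[1]
    hists = np.empty((n_neurons, n_bins))
    for j in range(n_neurons):
        counts, _ = np.histogram(activations[:, j], bins=n_bins, range=value_range)
        hists[j] = counts / max(counts.sum(), 1)  # per-neuron normalisation
    return hists.reshape(-1)  # flatten into a single sequence-level descriptor

rng = np.random.default_rng(0)
acts = rng.standard_normal((10, 64))  # 10 images, 64 tracked neurons
vdna = vdna_descriptor(acts)          # shape: (64 * 32,)

# A lightweight task-specific encoder maps the VDNA to a compact VPR
# descriptor; a random linear projection stands in for the learned one.
W = rng.standard_normal((128, vdna.size)) / np.sqrt(vdna.size)
descriptor = W @ vdna
descriptor = descriptor / np.linalg.norm(descriptor)  # unit norm for retrieval
```

Because each neuron's histogram is kept separate and neurons can be drawn from any layer, the representation retains both low- and high-level concepts before the encoder compresses it.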
Related papers
- See then Tell: Enhancing Key Information Extraction with Vision Grounding [54.061203106565706]
We introduce STNet (See then Tell Net), a novel end-to-end model designed to deliver precise answers with relevant vision grounding.
To enhance the model's seeing capabilities, we collect extensive structured table recognition datasets.
arXiv Detail & Related papers (2024-09-29T06:21:05Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- Collaborative Visual Place Recognition through Federated Learning [5.06570397863116]
Visual Place Recognition (VPR) aims to estimate the location of an image by treating it as a retrieval problem.
VPR uses a database of geo-tagged images and leverages deep neural networks to extract a global representation, called descriptor, from each image.
This research revisits the task of VPR through the lens of Federated Learning (FL), addressing several key challenges associated with this adaptation.
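The retrieval formulation described above can be sketched as nearest-neighbour search over unit-normalised global descriptors. This is a generic illustration of the VPR pipeline, not this paper's federated method; the function name, descriptors, and geo-tags are made up:

```python
import numpy as np

def localise(query_desc, db_descs, db_coords):
    """Return the geo-tag of the database image whose descriptor is most
    similar (cosine similarity) to the query image's descriptor."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    best = int(np.argmax(db @ q))  # nearest neighbour in descriptor space
    return db_coords[best], best

# Toy database: three descriptors with made-up (lat, lon) geo-tags.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
coords = [(51.75, -1.26), (48.85, 2.35), (40.71, -74.0)]
loc, idx = localise(np.array([0.9, 0.1]), db, coords)  # matches the first entry
```

In practice the descriptors come from a deep network and the database holds many geo-tagged images, but the retrieval step is this same nearest-neighbour lookup.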
arXiv Detail & Related papers (2024-04-20T08:48:37Z)
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
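The alternation FEC describes, grouping feature vectors into clusters and refreshing representatives, resembles a k-means-style loop. The sketch below is a generic version under that reading, not the paper's exact update rule; all names are illustrative:

```python
import numpy as np

def fec_style_alternation(features, k=3, n_iters=10, seed=0):
    """K-means-style alternation in the spirit of FEC: assign feature
    vectors (e.g. pixel features) to clusters, take each cluster's mean
    as its representative, then refresh the assignments."""
    rng = np.random.default_rng(seed)
    reps = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iters):
        # assign each feature to its nearest representative
        d = np.linalg.norm(features[:, None] - reps[None], axis=2)
        labels = d.argmin(axis=1)
        # update each representative from its current group
        for c in range(k):
            if np.any(labels == c):
                reps[c] = features[labels == c].mean(axis=0)
    return labels, reps

# Toy features: three well-separated groups of identical points.
pts = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0), np.full((5, 2), -10.0)])
labels, reps = fec_style_alternation(pts, k=3)
```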
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition [73.51329037954866]
We propose a robust global representation method with cross-image correlation awareness for visual place recognition.
Our method uses the attention mechanism to correlate multiple images within a batch.
Our method outperforms state-of-the-art methods by a large margin with significantly less training time.
arXiv Detail & Related papers (2024-02-29T15:05:11Z)
- ClusVPR: Efficient Visual Place Recognition with Clustering-based Weighted Transformer [13.0858576267115]
We present ClusVPR, a novel approach that tackles the specific issues of redundant information in duplicate regions and representations of small objects.
ClusVPR introduces a unique paradigm called Clustering-based weighted Transformer Network (CWTNet)
We also introduce the optimized-VLAD layer that significantly reduces the number of parameters and enhances model efficiency.
arXiv Detail & Related papers (2023-10-06T09:01:15Z)
- Adaptive Generation of Privileged Intermediate Information for Visible-Infrared Person Re-Identification [11.93952924941977]
This paper introduces the Adaptive Generation of Privileged Intermediate Information training approach.
AGPI2 is introduced to adapt and generate a virtual domain that bridges discriminant information between the V and I modalities.
Experimental results conducted on challenging V-I ReID indicate that AGPI2 increases matching accuracy without extra computational resources.
arXiv Detail & Related papers (2023-07-06T18:08:36Z)
- Autoencoders with Intrinsic Dimension Constraints for Learning Low Dimensional Image Representations [27.40298734517967]
We propose a novel deep representation learning approach with autoencoder, which incorporates regularization of the global and local ID constraints into the reconstruction of data representations.
This approach not only preserves the global manifold structure of the whole dataset, but also maintains the local manifold structure of the feature maps of each point.
arXiv Detail & Related papers (2023-04-16T03:43:08Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.