Fast and Incremental Loop Closure Detection with Deep Features and
Proximity Graphs
- URL: http://arxiv.org/abs/2010.11703v2
- Date: Sun, 2 Jan 2022 13:49:52 GMT
- Title: Fast and Incremental Loop Closure Detection with Deep Features and
Proximity Graphs
- Authors: Shan An, Haogang Zhu, Dong Wei, Konstantinos A. Tsintotas, Antonios
Gasteratos
- Abstract summary: This article proposes an appearance-based loop closure detection pipeline named "FILD++".
The system is fed by consecutive images and, via passing them twice through a single convolutional neural network, global and local deep features are extracted.
An image-to-image pairing follows, which exploits local features to evaluate the spatial information.
- Score: 13.328790865796224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the robotics community has extensively examined methods
concerning the place recognition task within the scope of simultaneous
localization and mapping applications. This article proposes an appearance-based
loop closure detection pipeline named "FILD++" (Fast and Incremental Loop
closure Detection). First, the system is fed consecutive images and, by passing
them twice through a single convolutional neural network, extracts global and
local deep features. Subsequently, a hierarchical navigable small-world graph
incrementally constructs a visual database representing the robot's traversed
path based on the computed global features. Finally, a query image, grabbed at
each time step, is used to retrieve similar locations along the traversed route.
An image-to-image pairing follows, which exploits local features to evaluate the
spatial information. Thus, in contrast to our previous work (FILD), we propose a
single network for both global and local feature extraction, while an exhaustive
search over the generated deep local features is adopted for the verification
process, avoiding the use of hash codes. Exhaustive experiments on eleven
publicly available datasets exhibit the system's high performance (achieving the
highest recall score on eight of them) and low execution times (22.05 ms on
average on New College, the largest one, containing 52480 images) compared to
other state-of-the-art approaches.
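The pipeline described above (global descriptors indexing an incrementally built database of the traversed path, with local features used for spatial verification of retrieved candidates) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: brute-force cosine retrieval stands in for the hierarchical navigable small-world graph, and the `_verify` mutual-nearest-neighbor check is a hypothetical placeholder for the paper's exhaustive local-feature verification.

```python
import numpy as np

class LoopClosureDetector:
    """Minimal sketch of an FILD++-style loop closure pipeline.

    Global descriptors index the traversed path; retrieved candidates are
    verified with local features. Brute-force cosine search stands in for
    the HNSW graph used in the actual system.
    """

    def __init__(self, num_candidates=5, similarity_threshold=0.8):
        self.global_db = []   # one global descriptor per visited frame
        self.local_db = []    # one local feature set per visited frame
        self.k = num_candidates
        self.threshold = similarity_threshold

    def query_and_insert(self, global_desc, local_desc):
        """Retrieve loop-closure candidates for the current frame, then
        add the frame to the incremental database."""
        matches = []
        if self.global_db:
            db = np.stack(self.global_db)
            # Cosine similarity against every stored global descriptor.
            sims = db @ global_desc / (
                np.linalg.norm(db, axis=1) * np.linalg.norm(global_desc) + 1e-12)
            # Keep the top-k candidates that pass local-feature verification.
            for idx in np.argsort(sims)[::-1][:self.k]:
                if sims[idx] >= self.threshold and self._verify(
                        local_desc, self.local_db[idx]):
                    matches.append(int(idx))
        self.global_db.append(global_desc)
        self.local_db.append(local_desc)
        return matches

    def _verify(self, q_locals, db_locals):
        # Placeholder spatial check: exhaustive pairing of local features,
        # accepting frames that share enough mutually nearest descriptors.
        sims = q_locals @ db_locals.T
        mutual = sims.argmax(1)[sims.argmax(0)] == np.arange(db_locals.shape[0])
        return mutual.sum() >= min(len(q_locals), len(db_locals)) // 2
```

In the real system, replacing the brute-force search with an HNSW index (e.g. a library such as hnswlib) is what keeps query times low as the database grows with the robot's trajectory.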
Related papers
- Deep Homography Estimation for Visual Place Recognition [49.235432979736395]
We propose a transformer-based deep homography estimation (DHE) network.
It takes the dense feature map extracted by a backbone network as input and fits homography for fast and learnable geometric verification.
Experiments on benchmark datasets show that our method can outperform several state-of-the-art methods.
arXiv Detail & Related papers (2024-02-25T13:22:17Z) - AANet: Aggregation and Alignment Network with Semi-hard Positive Sample
Mining for Hierarchical Place Recognition [48.043749855085025]
Visual place recognition (VPR), a research hotspot in robotics, uses visual information to localize robots.
We present a unified network capable of extracting global features for retrieving candidates via an aggregation module.
We also propose a Semi-hard Positive Sample Mining (ShPSM) strategy to select appropriate hard positive images for training more robust VPR networks.
arXiv Detail & Related papers (2023-10-08T14:46:11Z) - Yes, we CANN: Constrained Approximate Nearest Neighbors for local
feature-based visual localization [2.915868985330569]
Constrained Approximate Nearest Neighbors (CANN) is a joint solution of k-nearest-neighbors across both the geometry and appearance space using only local features.
Our method significantly outperforms both state-of-the-art global feature-based retrieval and approaches using local feature aggregation schemes.
arXiv Detail & Related papers (2023-06-15T10:12:10Z) - Delving into Sequential Patches for Deepfake Detection [64.19468088546743]
Recent advances in face forgery techniques produce nearly untraceable deepfake videos, which could be leveraged with malicious intentions.
Previous studies have identified the importance of local low-level cues and temporal information for generalizing well across deepfake methods.
We propose the Local- & Temporal-aware Transformer-based Deepfake Detection framework, which adopts a local-to-global learning protocol.
arXiv Detail & Related papers (2022-07-06T16:46:30Z) - Reuse your features: unifying retrieval and feature-metric alignment [3.845387441054033]
DRAN is the first network able to produce the features for the three steps of visual localization.
It achieves competitive performance in terms of robustness and accuracy under challenging conditions in public benchmarks.
arXiv Detail & Related papers (2022-04-13T10:42:00Z) - MD-CSDNetwork: Multi-Domain Cross Stitched Network for Deepfake
Detection [80.83725644958633]
Current deepfake generation methods leave discriminative artifacts in the frequency spectrum of fake images and videos.
We present a novel approach, termed as MD-CSDNetwork, for combining the features in the spatial and frequency domains to mine a shared discriminative representation.
arXiv Detail & Related papers (2021-09-15T14:11:53Z) - Watching You: Global-guided Reciprocal Learning for Video-based Person
Re-identification [82.6971648465279]
We propose a novel Global-guided Reciprocal Learning framework for video-based person Re-ID.
Our approach can achieve better performance than other state-of-the-art approaches.
arXiv Detail & Related papers (2021-03-07T12:27:42Z) - Deep Learning based Person Re-identification [2.9631016562930546]
We propose an efficient hierarchical re-identification approach in which color histogram based comparison is first employed to find the closest matches in the gallery set.
A silhouette part-based feature extraction scheme is adopted in each level of hierarchy to preserve the relative locations of the different body structures.
Results reveal that it outperforms most state-of-the-art approaches in terms of overall accuracy.
arXiv Detail & Related papers (2020-05-07T07:30:28Z) - Dense Residual Network: Enhancing Global Dense Feature Flow for
Character Recognition [75.4027660840568]
This paper explores how to enhance the local and global dense feature flow by fully exploiting hierarchical features from all the convolutional layers.
Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN) for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.