Online Visual Place Recognition via Saliency Re-identification
- URL: http://arxiv.org/abs/2007.14549v1
- Date: Wed, 29 Jul 2020 01:53:45 GMT
- Title: Online Visual Place Recognition via Saliency Re-identification
- Authors: Han Wang, Chen Wang and Lihua Xie
- Abstract summary: Existing methods often formulate visual place recognition as feature matching.
Inspired by the fact that human beings always recognize a place by remembering salient regions or landmarks, we formulate visual place recognition as saliency re-identification.
Meanwhile, we propose to perform both saliency detection and re-identification in the frequency domain, in which all operations become element-wise.
- Score: 26.209412893744094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an essential component of visual simultaneous localization and mapping
(SLAM), place recognition is crucial for robot navigation and autonomous
driving. Existing methods often formulate visual place recognition as feature
matching, which is computationally expensive for many robotic applications with
limited computing power, e.g., autonomous driving and cleaning robots. Inspired
by the fact that human beings always recognize a place by remembering salient
regions or landmarks that are more attractive or interesting than others, we
formulate visual place recognition as saliency re-identification. Meanwhile,
we propose to perform both saliency detection and re-identification in the
frequency domain, in which all operations become element-wise. The
experiments show that our proposed method achieves competitive accuracy and
much higher speed than the state-of-the-art feature-based methods. The proposed
method is open-sourced and available at
https://github.com/wh200720041/SRLCD.git.
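To make the element-wise frequency-domain idea concrete, below is a minimal Python/NumPy sketch. It is not the authors' SRLCD implementation from the linked repository: the spectral-residual saliency detector, the FFT-based cross-correlation matcher, the function names, and all parameter values (filter sizes, the 128x128 test images) are illustrative assumptions chosen only to show how both the saliency and the re-identification step reduce to element-wise operations on image spectra.

```python
# Illustrative sketch of frequency-domain saliency detection and re-identification.
# Stand-ins, not the authors' SRLCD code: spectral-residual saliency plus a plain
# FFT-based cross-correlation score, both dominated by element-wise spectrum math.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter


def spectral_residual_saliency(img: np.ndarray) -> np.ndarray:
    """Spectral-residual saliency map for a grayscale image, normalized to [0, 1]."""
    spectrum = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    # Spectral residual = log amplitude minus its local average (element-wise).
    residual = log_amp - uniform_filter(log_amp, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=2.5)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)


def frequency_domain_similarity(sal_a: np.ndarray, sal_b: np.ndarray) -> float:
    """Peak of the circular cross-correlation between two saliency maps,
    computed as an element-wise product of their FFTs."""
    a = (sal_a - sal_a.mean()) / (sal_a.std() + 1e-12)
    b = (sal_b - sal_b.mean()) / (sal_b.std() + 1e-12)
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return float(corr.max() / a.size)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.random((128, 128))
    database = [rng.random((128, 128)) for _ in range(5)]
    database.append(query + 0.01 * rng.random((128, 128)))  # a revisited place
    q_sal = spectral_residual_saliency(query)
    scores = [frequency_domain_similarity(q_sal, spectral_residual_saliency(img))
              for img in database]
    print("best match index:", int(np.argmax(scores)))  # expected: the perturbed copy
```

In this sketch the only per-pixel work outside the FFTs is element-wise arithmetic on spectra, which is what makes a frequency-domain formulation attractive compared with extracting and matching local feature descriptors.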
Related papers
- Exploring Emerging Trends and Research Opportunities in Visual Place Recognition [28.76562316749074]
Vision-based recognition is a long-standing challenge in the computer vision and robotics communities.
Visual place recognition is vital for most localization implementations.
Researchers have recently turned their attention to vision-language models.
arXiv Detail & Related papers (2024-11-18T11:36:17Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Apprenticeship-Inspired Elegance: Synergistic Knowledge Distillation Empowers Spiking Neural Networks for Efficient Single-Eye Emotion Recognition [53.359383163184425]
We introduce a novel multimodality synergistic knowledge distillation scheme tailored for efficient single-eye emotion recognition tasks.
This method allows a lightweight, unimodal student spiking neural network (SNN) to extract rich knowledge from an event-frame multimodal teacher network.
arXiv Detail & Related papers (2024-06-20T07:24:47Z)
- Emotion Recognition from the perspective of Activity Recognition [0.0]
Appraising human emotional states, behaviors, and reactions displayed in real-world settings can be accomplished using latent continuous dimensions.
For emotion recognition systems to be deployed and integrated into real-world mobile and computing devices, we need to consider data collected in the wild.
We propose a novel three-stream end-to-end deep learning regression pipeline with an attention mechanism.
arXiv Detail & Related papers (2024-03-24T18:53:57Z)
- General Place Recognition Survey: Towards the Real-world Autonomy Age [36.49196034588173]
The place recognition community has made astonishing progress over the last 20 years.
Few methods have shown promising place recognition performance in complex real-world scenarios.
This paper can be a tutorial for researchers new to the place recognition community and those who care about long-term robotics autonomy.
arXiv Detail & Related papers (2022-09-09T19:37:05Z)
- Scalable Vehicle Re-Identification via Self-Supervision [66.2562538902156]
Vehicle Re-Identification is one of the key elements in city-scale vehicle analytics systems.
State-of-the-art solutions for vehicle re-id mostly focus on improving accuracy on existing re-id benchmarks and often ignore computational complexity.
We propose a simple yet effective hybrid solution empowered by self-supervised training which only uses a single network during inference time.
arXiv Detail & Related papers (2022-05-16T12:14:42Z)
- A Spatio-Temporal Multilayer Perceptron for Gesture Recognition [70.34489104710366]
We propose a multilayer state-weighted perceptron for gesture recognition in the context of autonomous vehicles.
An evaluation on the TCG and Drive&Act datasets is provided to showcase the promising performance of our approach.
We deploy our model to our autonomous vehicle to show its real-time capability and stable execution.
arXiv Detail & Related papers (2022-04-25T08:42:47Z)
- SeekNet: Improved Human Instance Segmentation via Reinforcement Learning Based Optimized Robot Relocation [17.4240390944016]
Amodal recognition is the ability of the system to detect occluded objects.
We propose SeekNet, an improved optimization method for amodal recognition through embodied visual recognition.
We also implement SeekNet for social robots, where there are multiple interactions with crowded humans.
arXiv Detail & Related papers (2020-11-17T15:03:30Z)
- Cross-Task Transfer for Geotagged Audiovisual Aerial Scene Recognition [61.54648991466747]
We explore an audiovisual aerial scene recognition task using both images and sounds as input.
We show the benefit of exploiting audio information for aerial scene recognition.
arXiv Detail & Related papers (2020-05-18T04:14:16Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)