Discriminative and Semantic Feature Selection for Place Recognition
towards Dynamic Environments
- URL: http://arxiv.org/abs/2103.10023v2
- Date: Sun, 21 Mar 2021 03:35:24 GMT
- Title: Discriminative and Semantic Feature Selection for Place Recognition
towards Dynamic Environments
- Authors: Yuxin Tian, Jinyu Miao, Xingming Wu, Haosong Yue, Zhong Liu, Weihai
Chen
- Abstract summary: We propose a discriminative and semantic feature selection network, dubbed DSFeat.
Supervised by both semantic information and an attention mechanism, the network estimates the pixel-wise stability of features.
Notably, the proposed module can be readily plugged into any feature-based SLAM system.
- Score: 12.973423183330961
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Features play an important role in various visual tasks, especially in visual
place recognition applied in perceptually changing environments. In this paper,
we address the challenges of place recognition due to dynamics and confusable
patterns by proposing a discriminative and semantic feature selection network,
dubbed DSFeat. Supervised by both semantic information and an attention
mechanism, the network estimates the pixel-wise stability of features, indicating the
probability that a feature is extracted from a static and stable region,
and then selects features that are insensitive to dynamic interference and
distinguishable enough to be correctly matched. The designed feature selection model
is evaluated in place recognition and SLAM systems on several public datasets
with varying appearances and viewpoints. Experimental results demonstrate the
effectiveness of the proposed method. Notably, our proposal can be readily
plugged into any feature-based SLAM system.
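As a rough illustration of the selection step described above, the following is a minimal sketch, not the authors' released DSFeat code: a toy convolutional network predicts a pixel-wise stability map, and a helper keeps only the keypoints that fall in regions scored as static and stable, so the filtered set could be handed to a feature-based SLAM front end. The architecture, the `StabilityNet` and `select_stable_keypoints` names, and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch (not the authors' released implementation): a toy CNN predicts
# a pixel-wise stability map, and a helper keeps only keypoints that land in
# regions scored as static/stable. Names, architecture, and threshold are
# illustrative assumptions.
import torch
import torch.nn as nn


class StabilityNet(nn.Module):
    """Toy stand-in for a DSFeat-style network: image in, stability map out."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Single-channel head squashed to [0, 1]: probability that the pixel
        # belongs to a static, stable region.
        self.head = nn.Sequential(nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, image):                    # image: (B, 3, H, W)
        return self.head(self.encoder(image))    # map:   (B, 1, H, W)


def select_stable_keypoints(keypoints, stability_map, threshold=0.5):
    """Keep keypoints whose (x, y) location scores above `threshold`.

    keypoints:     (N, 2) tensor of pixel coordinates (x, y)
    stability_map: (H, W) tensor of per-pixel scores in [0, 1]
    """
    xs = keypoints[:, 0].long().clamp(0, stability_map.shape[1] - 1)
    ys = keypoints[:, 1].long().clamp(0, stability_map.shape[0] - 1)
    scores = stability_map[ys, xs]               # score at each keypoint
    return keypoints[scores > threshold]


if __name__ == "__main__":
    net = StabilityNet().eval()
    frame = torch.rand(1, 3, 120, 160)           # dummy RGB frame (H=120, W=160)
    with torch.no_grad():
        stability = net(frame)[0, 0]             # (120, 160) stability map
    # Dummy detector output: 200 random (x, y) keypoints inside the frame.
    keypoints = torch.stack(
        [torch.randint(0, 160, (200,)), torch.randint(0, 120, (200,))], dim=1
    ).float()
    kept = select_stable_keypoints(keypoints, stability)
    print(f"kept {kept.shape[0]} of {keypoints.shape[0]} keypoints")
```

In the actual method the stability map is supervised with semantic information and an attention mechanism as the abstract describes; here the network is untrained and the frame and keypoints are random, only to show the data flow of filtering a detector's keypoints before matching.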
Related papers
- An Information Compensation Framework for Zero-Shot Skeleton-based Action Recognition [49.45660055499103]
Zero-shot human skeleton-based action recognition aims to construct a model that can recognize actions outside the categories seen during training.
Previous research has focused on aligning sequences' visual and semantic spatial distributions.
We introduce a new loss function sampling method to obtain a tight and robust representation.
arXiv Detail & Related papers (2024-06-02T06:53:01Z) - High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning [54.86882315023791]
We propose an innovative approach called High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning (HDAFL).
HDAFL utilizes multiple convolutional kernels to automatically learn discriminative regions highly correlated with attributes in images.
We also introduce a Transformer-based attribute discrimination encoder to enhance the discriminative capability among attributes.
arXiv Detail & Related papers (2024-04-07T13:17:47Z) - Selective Domain-Invariant Feature for Generalizable Deepfake Detection [21.671221284842847]
We propose a novel framework which reduces the sensitivity to face forgery by fusing content features and styles.
Both qualitative and quantitative results on existing benchmarks and proposals demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-03-19T13:09:19Z) - Reinforcement-based Display-size Selection for Frugal Satellite Image
Change Detection [5.656581242851759]
We introduce a novel interactive satellite image change detection algorithm based on active learning.
The proposed method is iterative and consists of frugally probing the user (oracle) for the labels of the most critical images.
arXiv Detail & Related papers (2023-12-28T11:14:43Z) - Learning Common Rationale to Improve Self-Supervised Representation for
Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z) - Learning Diversified Feature Representations for Facial Expression
Recognition in the Wild [97.14064057840089]
We propose a mechanism to diversify the features extracted by CNN layers of state-of-the-art facial expression recognition architectures.
Experimental results on three well-known facial expression recognition in-the-wild datasets, AffectNet, FER+, and RAF-DB, show the effectiveness of our method.
arXiv Detail & Related papers (2022-10-17T19:25:28Z) - VFDS: Variational Foresight Dynamic Selection in Bayesian Neural
Networks for Efficient Human Activity Recognition [81.29900407096977]
Variational Foresight Dynamic Selection (VFDS) learns a policy that selects the next feature subset to observe.
We apply VFDS on the Human Activity Recognition (HAR) task where the performance-cost trade-off is critical in its practice.
arXiv Detail & Related papers (2022-03-31T22:52:43Z) - EEGminer: Discovering Interpretable Features of Brain Activity with
Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
arXiv Detail & Related papers (2021-10-19T14:22:04Z) - Feature selection for gesture recognition in Internet-of-Things for
healthcare [10.155382321743181]
In the context of gesture recognition, EEG and EMG could be simultaneously recorded to identify the gesture being performed, and the quality of its performance.
This paper proposes a new algorithm that aims (i) to robustly extract the most relevant features to classify different grasping tasks, and (ii) to retain the natural meaning of the selected features.
arXiv Detail & Related papers (2020-05-22T06:54:53Z)