Improving Robot Localisation by Ignoring Visual Distraction
- URL: http://arxiv.org/abs/2107.11857v1
- Date: Sun, 25 Jul 2021 17:45:17 GMT
- Title: Improving Robot Localisation by Ignoring Visual Distraction
- Authors: Oscar Mendez, Matthew Vowels, Richard Bowden
- Abstract summary: We introduce Neural Blindness, which gives an agent the ability to completely ignore objects or classes that are deemed distractors.
More explicitly, we aim to render a neural network completely incapable of representing specific chosen classes in its latent space.
In a very real sense, this makes the network "blind" to certain classes, allowing an agent to focus on what is important for a given task, and we demonstrate how this can be used to improve localisation.
- Score: 34.8860186009308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attention is an important component of modern deep learning. However, less
emphasis has been put on its inverse: ignoring distraction. Our daily lives
require us to explicitly avoid giving attention to salient visual features that
confound the task we are trying to accomplish. This visual prioritisation
allows us to concentrate on important tasks while ignoring visual distractors.
In this work, we introduce Neural Blindness, which gives an agent the ability
to completely ignore objects or classes that are deemed distractors. More
explicitly, we aim to render a neural network completely incapable of
representing specific chosen classes in its latent space. In a very real sense,
this makes the network "blind" to certain classes, allowing an agent to focus
on what is important for a given task, and we demonstrate how this can be used
to improve localisation.
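The abstract does not spell out the training mechanism behind Neural Blindness, but its stated goal, a latent space incapable of representing chosen classes, has a simple geometric analogue. The numpy sketch below is purely illustrative and is not the authors' method: it builds a toy latent space where a distractor class is linearly decodable, projects out the class-discriminative direction, and checks that a simple probe can no longer recover the distractor label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent features: 200 samples, 8-D, with a binary distractor label
# encoded along one "class direction" in the latent space.
n, d = 200, 8
labels = rng.integers(0, 2, size=n)
class_dir = rng.normal(size=d)
class_dir /= np.linalg.norm(class_dir)
feats = rng.normal(size=(n, d)) + 3.0 * np.outer(labels - 0.5, class_dir)

def probe_accuracy(x, y):
    """Nearest-class-mean probe: how well can the label be read from x?"""
    mu0, mu1 = x[y == 0].mean(axis=0), x[y == 1].mean(axis=0)
    pred = np.linalg.norm(x - mu1, axis=1) < np.linalg.norm(x - mu0, axis=1)
    return (pred == y).mean()

# Before: the latent space clearly represents the distractor class.
acc_before = probe_accuracy(feats, labels)

# "Blinding" analogue: project out the class-discriminative direction,
# so the latent space can no longer represent this class distinction.
w = feats[labels == 1].mean(axis=0) - feats[labels == 0].mean(axis=0)
w /= np.linalg.norm(w)
blinded = feats - np.outer(feats @ w, w)

# After: probe accuracy collapses towards chance (~0.5).
acc_after = probe_accuracy(blinded, labels)
print(f"probe accuracy before: {acc_before:.2f}, after: {acc_after:.2f}")
```

A linear projection only removes linearly decodable information; the paper's contribution is presumably achieving a comparable effect inside a deep network's learned representation.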
Related papers
- On the Surprising Effectiveness of Attention Transfer for Vision Transformers [118.83572030360843]
Conventional wisdom suggests that pre-training Vision Transformers (ViT) improves downstream performance by learning useful representations.
We investigate this question and find that the features and representations learned during pre-training are not essential.
arXiv Detail & Related papers (2024-11-14T18:59:40Z)
- Visual Attention Network [90.0753726786985]
We propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention.
We also introduce a novel neural network based on LKA, namely Visual Attention Network (VAN)
VAN outperforms state-of-the-art vision transformers and convolutional neural networks by a large margin in extensive experiments.
arXiv Detail & Related papers (2022-02-20T06:35:18Z)
- Learning to ignore: rethinking attention in CNNs [87.01305532842878]
We propose to reformulate the attention mechanism in CNNs to learn to ignore instead of learning to attend.
Specifically, we propose to explicitly learn irrelevant information in the scene and suppress it in the produced representation.
arXiv Detail & Related papers (2021-11-10T13:47:37Z)
- Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification [101.49122450005869]
We present a counterfactual attention learning method to learn more effective attention based on causal inference.
Specifically, we analyze the effect of the learned visual attention on network prediction.
We evaluate our method on a wide range of fine-grained recognition tasks.
arXiv Detail & Related papers (2021-08-19T14:53:40Z)
- Understanding top-down attention using task-oriented ablation design [0.22940141855172028]
Top-down attention allows neural networks, both artificial and biological, to focus on the information most relevant for a given task.
We aim to answer this with a computational experiment based on a general framework called task-oriented ablation design.
We compare the performance of two neural networks, one with top-down attention and one without.
arXiv Detail & Related papers (2021-06-08T21:01:47Z)
- Focus Longer to See Better: Recursively Refined Attention for Fine-Grained Image Classification [148.4492675737644]
Deep neural networks have made great strides on the coarse-grained image classification task.
In this paper, we try to focus on these marginal differences to extract more representative features.
Our network repetitively focuses on parts of images to spot small discriminative parts among the classes.
arXiv Detail & Related papers (2020-05-22T03:14:18Z)
- Neuroevolution of Self-Interpretable Agents [11.171154483167514]
Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight.
Motivated by selective attention, we study the properties of artificial agents that perceive the world through the lens of a self-attention bottleneck.
arXiv Detail & Related papers (2020-03-18T11:40:35Z)
- The perceptual boost of visual attention is task-dependent in naturalistic settings [5.735035463793008]
We design a collection of visual tasks, each consisting of classifying images from a chosen task set.
The nature of a task is determined by which categories are included in the task set.
On each task we train an attention-augmented neural network and then compare its accuracy to that of a baseline network.
We show that the perceptual boost of attention is stronger with increasing task-set difficulty, and weaker with increasing task-set size and with increasing perceptual similarity within a task set.
arXiv Detail & Related papers (2020-02-22T09:10:24Z)
- The Costs and Benefits of Goal-Directed Attention in Deep Convolutional Neural Networks [6.445605125467574]
People deploy top-down, goal-directed attention to accomplish tasks, such as finding lost keys.
Motivated by selective attention in categorisation models, we developed a goal-directed attention mechanism that can process naturalistic (photographic) stimuli.
Our attentional mechanism incorporates top-down influences from prefrontal cortex (PFC) to support goal-directed behaviour.
arXiv Detail & Related papers (2020-02-06T16:42:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.