Selective Particle Attention: Visual Feature-Based Attention in Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2008.11491v1
- Date: Wed, 26 Aug 2020 11:07:50 GMT
- Title: Selective Particle Attention: Visual Feature-Based Attention in Deep
Reinforcement Learning
- Authors: Sam Blakeman, Denis Mareschal
- Abstract summary: We focus on one particular form of visual attention known as feature-based attention.
Visual feature-based attention has been proposed to improve the efficiency of Reinforcement Learning.
We propose a novel algorithm, termed Selective Particle Attention (SPA), which imbues a Deep RL agent with the ability to perform selective feature-based attention.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human brain uses selective attention to filter perceptual input so that
only the components that are useful for behaviour are processed using its
limited computational resources. We focus on one particular form of visual
attention known as feature-based attention, which is concerned with identifying
features of the visual input that are important for the current task regardless
of their spatial location. Visual feature-based attention has been proposed to
improve the efficiency of Reinforcement Learning (RL) by reducing the
dimensionality of state representations and guiding learning towards relevant
features. Despite achieving human-level performance in complex perceptual-motor
tasks, Deep RL algorithms have been consistently criticised for their poor
efficiency and lack of flexibility. Visual feature-based attention therefore
represents one option for addressing these criticisms. Nevertheless, it is
still an open question how the brain is able to learn which features to attend
to during RL. To help answer this question we propose a novel algorithm, termed
Selective Particle Attention (SPA), which imbues a Deep RL agent with the
ability to perform selective feature-based attention. SPA learns which
combinations of features to attend to based on their bottom-up saliency and how
accurately they predict future reward. We evaluate SPA on a multiple choice
task and a 2D video game that both involve raw pixel input and dynamic changes
to the task structure. We show various benefits of SPA over approaches that
naively attend to either all or random subsets of features. Our results
demonstrate (1) how visual feature-based attention in Deep RL models can
improve their learning efficiency and ability to deal with sudden changes in
task structure and (2) that particle filters may represent a viable
computational account of how visual feature-based attention occurs in the
brain.
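The abstract describes SPA as a particle filter over feature subsets, where particles are reweighted by how accurately their attended features predict reward. The paper's actual update equations are not given here, so the following is only a minimal illustrative sketch of that idea under assumed details: binary attention masks as particles, weights inversely proportional to a per-particle reward-prediction error, multinomial resampling, and a small mask-flip perturbation (the function name `spa_step` and the `flip_prob` parameter are hypothetical, not from the paper).

```python
import random

def spa_step(particles, reward_errors, flip_prob=0.05):
    """One illustrative particle-filter update over binary feature-attention masks.

    particles: list of binary masks (1 = attend to that feature).
    reward_errors: per-particle reward-prediction error (lower = better).
    flip_prob: chance of flipping each mask bit, so attention can shift
               when the task structure suddenly changes.
    """
    # Weight each particle by how well its feature subset predicted reward.
    weights = [1.0 / (1e-6 + e) for e in reward_errors]
    # Multinomial resampling: good feature subsets are duplicated,
    # poor ones die out.
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # Perturb the resampled masks to keep exploring feature combinations.
    return [[b ^ (random.random() < flip_prob) for b in mask]
            for mask in resampled]
```

Run over many steps, the particle population concentrates on the feature subsets that best predict reward, while the perturbation step lets it re-adapt after a task change, which is the qualitative behaviour the abstract attributes to SPA.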
Related papers
- Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction [3.2873782624127834]
This paper examines the joint impact of visual highlighting (permanent and dynamic) and dual-task-induced cognitive load on gaze behaviour.
We show that state-of-the-art saliency models increase their performance when accounting for different cognitive loads.
arXiv Detail & Related papers (2024-04-22T14:45:30Z) - ResMatch: Residual Attention Learning for Local Feature Matching [51.07496081296863]
We rethink cross- and self-attention from the viewpoint of traditional feature matching and filtering.
We inject the similarity of descriptors and relative positions into the cross- and self-attention scores.
We mine intra- and inter-neighbors according to the similarity of descriptors and relative positions.
arXiv Detail & Related papers (2023-07-11T11:32:12Z) - Learning Task-relevant Representations for Generalization via
Characteristic Functions of Reward Sequence Distributions [63.773813221460614]
Generalization across different environments with the same tasks is critical for successful applications of visual reinforcement learning.
We propose a novel approach, namely Characteristic Reward Sequence Prediction (CRESP), to extract the task-relevant information.
Experiments demonstrate that CRESP significantly improves the performance of generalization on unseen environments.
arXiv Detail & Related papers (2022-05-20T14:52:03Z) - Dual Cross-Attention Learning for Fine-Grained Visual Categorization and
Object Re-Identification [19.957957963417414]
We propose a dual cross-attention learning (DCAL) algorithm to coordinate with self-attention learning.
First, we propose global-local cross-attention (GLCA) to enhance the interactions between global images and local high-response regions.
Second, we propose pair-wise cross-attention (PWCA) to establish the interactions between image pairs.
arXiv Detail & Related papers (2022-05-04T16:14:26Z) - Counterfactual Attention Learning for Fine-Grained Visual Categorization
and Re-identification [101.49122450005869]
We present a counterfactual attention learning method to learn more effective attention based on causal inference.
Specifically, we analyze the effect of the learned visual attention on network prediction.
We evaluate our method on a wide range of fine-grained recognition tasks.
arXiv Detail & Related papers (2021-08-19T14:53:40Z) - Understanding top-down attention using task-oriented ablation design [0.22940141855172028]
Top-down attention allows neural networks, both artificial and biological, to focus on the information most relevant for a given task.
We aim to answer this with a computational experiment based on a general framework called task-oriented ablation design.
We compare the performance of two neural networks, one with top-down attention and one without.
arXiv Detail & Related papers (2021-06-08T21:01:47Z) - Unlocking Pixels for Reinforcement Learning via Implicit Attention [61.666538764049854]
We make use of new efficient attention algorithms, recently shown to be highly effective for Transformers.
This allows our attention-based controllers to scale to larger visual inputs, and facilitate the use of smaller patches.
In addition, we propose a new efficient algorithm approximating softmax attention with what we call hybrid random features.
arXiv Detail & Related papers (2021-02-08T17:00:26Z) - Deep Reinforced Attention Learning for Quality-Aware Visual Recognition [73.15276998621582]
We build upon the weakly-supervised generation mechanism of intermediate attention maps in any convolutional neural networks.
We introduce a meta critic network to evaluate the quality of attention maps in the main network.
arXiv Detail & Related papers (2020-07-13T02:44:38Z) - Attention or memory? Neurointerpretable agents in space and time [0.0]
We design a model incorporating a self-attention mechanism that implements task-state representations in semantic feature-space.
To evaluate the agent's selective properties, we add a large volume of task-irrelevant features to observations.
In line with neuroscience predictions, self-attention leads to increased robustness to noise compared to benchmark models.
arXiv Detail & Related papers (2020-07-09T15:04:26Z) - Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.