Unlocking Pixels for Reinforcement Learning via Implicit Attention
- URL: http://arxiv.org/abs/2102.04353v1
- Date: Mon, 8 Feb 2021 17:00:26 GMT
- Title: Unlocking Pixels for Reinforcement Learning via Implicit Attention
- Authors: Krzysztof Choromanski, Deepali Jain, Jack Parker-Holder, Xingyou Song,
Valerii Likhosherstov, Anirban Santara, Aldo Pacchiano, Yunhao Tang, Adrian
Weller
- Abstract summary: We make use of new efficient attention algorithms, recently shown to be highly effective for Transformers.
This allows our attention-based controllers to scale to larger visual inputs, and facilitate the use of smaller patches.
In addition, we propose a new efficient algorithm approximating softmax attention with what we call hybrid random features.
- Score: 61.666538764049854
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There has recently been significant interest in training reinforcement
learning (RL) agents in vision-based environments. This poses many challenges,
such as high dimensionality and potential for observational overfitting through
spurious correlations. A promising approach to solve both of these problems is
a self-attention bottleneck, which provides a simple and effective framework
for learning high performing policies, even in the presence of distractions.
However, due to poor scalability of attention architectures, these methods do
not scale beyond low resolution visual inputs, using large patches (thus small
attention matrices). In this paper we make use of new efficient attention
algorithms, recently shown to be highly effective for Transformers, and
demonstrate that these new techniques can be applied in the RL setting. This
allows our attention-based controllers to scale to larger visual inputs, and
facilitate the use of smaller patches, even individual pixels, improving
generalization. In addition, we propose a new efficient algorithm approximating
softmax attention with what we call hybrid random features, leveraging the
theory of angular kernels. We show theoretically and empirically that hybrid random features are a promising approach when using attention for vision-based RL.
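The core idea is easy to see in code: the quadratic softmax attention used to score image patches is replaced with a random-feature approximation, so the patch-by-patch attention matrix never has to be materialized and the input can be cut into many small patches (down to single pixels). The NumPy sketch below illustrates this with standard Performer-style positive random features and a simple top-k patch-selection bottleneck; the patch size, dimensions, and selection rule are illustrative assumptions, and it does not implement the hybrid random features proposed in the paper.

```python
import numpy as np

def positive_random_features(x, proj, eps=1e-6):
    """Performer-style positive random features for the softmax kernel:
    phi(x) = exp(W x - ||x||^2 / 2) / sqrt(m), so that
    E[phi(q) . phi(k)] = exp(q . k) without forming the attention matrix."""
    m = proj.shape[0]
    sq_norm = 0.5 * np.sum(x ** 2, axis=-1, keepdims=True)
    return np.exp(x @ proj.T - sq_norm) / np.sqrt(m) + eps

def attention_bottleneck(patches, Wq, Wk, proj, top_k=10):
    """Score patches with linear-complexity approximate softmax attention and
    keep only the top-k most-attended patches as the controller's observation."""
    d = Wq.shape[1]
    q = (patches @ Wq) / d ** 0.25        # (n, d); d^(1/4) scaling so that
    k = (patches @ Wk) / d ** 0.25        # phi(q) . phi(k) ~ exp(q.k / sqrt(d))
    q_prime = positive_random_features(q, proj)   # (n, m)
    k_prime = positive_random_features(k, proj)   # (n, m)
    # Row normalizers D_i = sum_j phi(q_i) . phi(k_j), computed in O(n m).
    row_norm = q_prime @ k_prime.sum(axis=0)      # (n,)
    # Column sums of the implicit row-normalized attention matrix A:
    # scores_j = sum_i A_ij, again without materializing the n x n matrix.
    scores = (q_prime / row_norm[:, None]).sum(axis=0) @ k_prime.T   # (n,)
    top_idx = np.argsort(-scores)[:top_k]
    return top_idx, patches[top_idx]

# Usage on a random 64x64 RGB "frame" cut into 4x4 pixel patches.
rng = np.random.default_rng(0)
patch_dim, d_qk, n_feat = 4 * 4 * 3, 16, 64
frame = rng.random((64, 64, 3))
patches = frame.reshape(16, 4, 16, 4, 3).transpose(0, 2, 1, 3, 4).reshape(-1, patch_dim)
Wq = rng.normal(size=(patch_dim, d_qk))
Wk = rng.normal(size=(patch_dim, d_qk))
proj = rng.normal(size=(n_feat, d_qk))    # Gaussian random projections
idx, selected = attention_bottleneck(patches, Wq, Wk, proj)
print(idx.shape, selected.shape)          # (10,) (10, 48)
```

Because the attention matrix is only ever used through matrix products with the random-feature maps, the cost is O(n m d) rather than O(n^2 d), which is what allows the number of patches n to grow toward pixel-level resolution.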
Related papers
- Anchor Attention, Small Cache: Code Generation with Large Language Models [15.94784908771546]
Current practices in NLP often use sparse attention, which can unfortunately lead to substantial inaccuracies, or hallucinations, in code generation tasks.
We propose a novel approach, AnchorCoder, which features token-wise anchor attention designed to extract and compress contextual information.
It can consistently achieve a significant (at least 70%) reduction in KV cache requirements, while preserving the majority of the model's performance.
arXiv Detail & Related papers (2024-11-11T02:47:05Z)
- Hybrid Dynamic Pruning: A Pathway to Efficient Transformer Inference [1.0919012968294923]
We introduce a novel algorithm-architecture co-design approach that accelerates transformers using head sparsity, block sparsity and approximation opportunities to reduce computations in attention and reduce memory access.
With the observation of the huge redundancy in attention scores and attention heads, we propose a novel integer-based row-balanced block pruning to prune unimportant blocks in the attention matrix at run time.
We also propose integer-based head pruning to detect and prune unimportant heads at an early stage at run time.
arXiv Detail & Related papers (2024-07-17T11:15:16Z)
- Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use [74.72150542395487]
An inherent waveform pattern in the attention allocation of large language models (LLMs) significantly affects their performance in tasks demanding a high degree of context awareness.
To address this issue, we propose a novel inference method named Attention Buckets.
arXiv Detail & Related papers (2023-12-07T17:24:51Z)
- Sample Less, Learn More: Efficient Action Recognition via Frame Feature Restoration [59.6021678234829]
We propose a novel method to restore the intermediate features for two sparsely sampled and adjacent video frames.
With the integration of our method, the efficiency of three commonly used baselines has been improved by over 50%, with a mere 0.5% reduction in recognition accuracy.
arXiv Detail & Related papers (2023-07-27T13:52:42Z)
- RFAConv: Innovating Spatial Attention and Standard Convolutional Operation [7.2646541547165056]
We propose a novel attention mechanism called Receptive-Field Attention (RFA).
RFA not only focuses on the receptive-field spatial feature but also provides effective attention weights for large-size convolutional kernels.
It offers nearly negligible increment of computational cost and parameters, while significantly improving network performance.
arXiv Detail & Related papers (2023-04-06T16:21:56Z)
- Rethinking Query-Key Pairwise Interactions in Vision Transformers [5.141895475956681]
We propose key-only attention, which excludes query-key pairwise interactions and uses a compute-efficient saliency gate to obtain attention weights (a rough sketch of this idea appears after this list).
We develop a new self-attention model family, LinGlos, which reaches state-of-the-art accuracy in the parameter-limited setting of the ImageNet classification benchmark.
arXiv Detail & Related papers (2022-07-01T03:36:49Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification [101.49122450005869]
We present a counterfactual attention learning method to learn more effective attention based on causal inference.
Specifically, we analyze the effect of the learned visual attention on network prediction.
We evaluate our method on a wide range of fine-grained recognition tasks.
arXiv Detail & Related papers (2021-08-19T14:53:40Z)
- Data-Informed Global Sparseness in Attention Mechanisms for Deep Neural Networks [33.07113523598028]
We propose Attention Pruning (AP), a framework that observes attention patterns in a fixed dataset and generates a global sparseness mask.
AP saves 90% of attention computation for language modeling and about 50% for machine translation and GLUE tasks, maintaining result quality.
arXiv Detail & Related papers (2020-11-20T13:58:21Z)
- Cost-effective Interactive Attention Learning with Neural Attention Processes [79.8115563067513]
We propose a novel interactive learning framework which we refer to as Interactive Attention Learning (IAL).
However, IAL is prone to overfitting due to the scarcity of human annotations and requires costly retraining.
We tackle these challenges by proposing a sample-efficient attention mechanism and a cost-effective reranking algorithm for instances and features.
arXiv Detail & Related papers (2020-06-09T17:36:41Z)
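For comparison with the main paper's random-feature route to cheap attention, the key-only attention mentioned in the LinGlos entry above drops query-key interactions entirely. The sketch below is a rough, generic rendering of that idea in NumPy; the gate parameterization, the residual use of the pooled context vector, and all shapes are assumptions for illustration, not the LinGlos architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def key_only_attention(x, Wk, Wv, w_gate):
    """Attention weights come from keys alone: a saliency gate maps each key
    to a scalar score, softmax turns the scores into weights, and the output
    is a single global context vector added back to every token."""
    k = x @ Wk                    # (n, d) keys
    v = x @ Wv                    # (n, d) values
    saliency = k @ w_gate         # (n,)  per-token saliency score
    weights = softmax(saliency)   # (n,)  no query-key pairwise term
    context = weights @ v         # (d,)  weighted sum of values
    return x + context            # broadcast global context to all tokens

# Usage on random data, e.g. 14x14 = 196 tokens of a vision model.
rng = np.random.default_rng(1)
n, d = 196, 64
x = rng.normal(size=(n, d))
out = key_only_attention(x, rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                         rng.normal(size=(d,)))
print(out.shape)                  # (196, 64)
```

The attention part costs O(n d) per layer, since each token contributes a single scalar saliency score instead of n pairwise dot products.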
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.