Constant Memory Attention Block
- URL: http://arxiv.org/abs/2306.12599v1
- Date: Wed, 21 Jun 2023 22:41:58 GMT
- Title: Constant Memory Attention Block
- Authors: Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio,
Mohamed Osama Ahmed
- Abstract summary: Constant Memory Attention Block (CMAB) is a novel general-purpose attention block that computes its output in constant memory and performs updates in constant computation.
We show our proposed methods achieve results competitive with state-of-the-art while being significantly more memory efficient.
- Score: 74.38724530521277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern foundation model architectures rely on attention mechanisms to
effectively capture context. However, these methods require linear or quadratic
memory in terms of the number of inputs/datapoints, limiting their
applicability in low-compute domains. In this work, we propose Constant Memory
Attention Block (CMAB), a novel general-purpose attention block that computes
its output in constant memory and performs updates in constant computation.
Highlighting CMAB's efficacy, we introduce methods for Neural Processes and
Temporal Point Processes. Empirically, we show our proposed methods achieve
results competitive with state-of-the-art while being significantly more memory
efficient.
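
The abstract does not spell out the mechanism, but a constant-memory attention output can be obtained by cross-attending a small, fixed set of latent vectors to the data and accumulating the softmax statistics in a streaming fashion. The NumPy sketch below illustrates that general idea; the class name, shapes, and chunked update are illustrative assumptions, not the authors' exact CMAB formulation.

```python
import numpy as np

class ConstantMemoryCrossAttention:
    """Minimal sketch: a fixed number of latent queries cross-attend to a
    stream of datapoints while only running softmax statistics are kept,
    so memory is O(num_latents * dim) regardless of how much data was seen,
    and folding in new datapoints costs O(chunk size)."""

    def __init__(self, num_latents=8, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.latents = rng.normal(size=(num_latents, dim))      # stand-in for learned latent queries
        self.w_k = rng.normal(size=(dim, dim)) / np.sqrt(dim)   # key projection
        self.w_v = rng.normal(size=(dim, dim)) / np.sqrt(dim)   # value projection
        self.scale = 1.0 / np.sqrt(dim)
        self.numer = np.zeros((num_latents, dim))               # running sum of exp(score) * value
        self.denom = np.zeros((num_latents, 1))                 # running softmax normalizer

    def update(self, x_chunk):
        """Fold a chunk of datapoints of shape (n, dim) into the running statistics."""
        k = x_chunk @ self.w_k
        v = x_chunk @ self.w_v
        # Unnormalized attention weights of every latent against the new datapoints.
        # (A production version would use a numerically stable online softmax.)
        w = np.exp((self.latents @ k.T) * self.scale)           # (num_latents, n)
        self.numer += w @ v
        self.denom += w.sum(axis=1, keepdims=True)

    def output(self):
        """Constant-size attention output, shape (num_latents, dim)."""
        return self.numer / self.denom

# Stream 10,000 datapoints in chunks of 100; the stored state never grows.
block = ConstantMemoryCrossAttention()
rng = np.random.default_rng(1)
for _ in range(100):
    block.update(rng.normal(size=(100, 16)))
print(block.output().shape)  # (8, 16)
```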
Related papers
- Hybrid Dynamic Pruning: A Pathway to Efficient Transformer Inference [1.0919012968294923]
We introduce a novel algorithm-architecture co-design approach that accelerates transformers using head sparsity, block sparsity and approximation opportunities to reduce computations in attention and reduce memory access.
Observing the large redundancy in attention scores and attention heads, we propose a novel integer-based row-balanced block pruning scheme that prunes unimportant blocks of the attention matrix at run time.
We also propose integer-based head pruning to detect and prune unimportant heads early at run time.
arXiv Detail & Related papers (2024-07-17T11:15:16Z)
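
The entry above prunes unimportant blocks of the attention matrix at run time; the exact integer-based, row-balanced scheme is not reproduced in this summary. Below is a rough NumPy sketch of the general block-pruning idea, with a simple mean-magnitude tile score and keep ratio standing in for the paper's criterion.

```python
import numpy as np

def block_pruned_attention(q, k, v, block=4, keep_ratio=0.5):
    """Rough sketch of run-time block pruning: score the attention matrix in
    (block x block) tiles, keep only the highest-scoring key tiles per query
    row band, and skip the rest when forming the output."""
    n, d = q.shape
    scores = (q @ k.T) / np.sqrt(d)                         # (n, n) attention logits
    nb = n // block
    out = np.zeros_like(v)
    norm = np.zeros((n, 1))
    for bi in range(nb):                                    # row band of queries
        rows = slice(bi * block, (bi + 1) * block)
        # Importance of each key tile for this row band (mean absolute logit).
        tile_scores = np.array([
            np.abs(scores[rows, bj * block:(bj + 1) * block]).mean()
            for bj in range(nb)
        ])
        keep = np.argsort(tile_scores)[-max(1, int(keep_ratio * nb)):]
        for bj in keep:                                     # only attend to kept tiles
            cols = slice(bj * block, (bj + 1) * block)
            w = np.exp(scores[rows, cols])
            out[rows] += w @ v[cols]
            norm[rows] += w.sum(axis=1, keepdims=True)
    return out / norm

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
print(block_pruned_attention(q, k, v).shape)                # (16, 8)
```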
- CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling [52.404072802235234]
We introduce Chunked Instruction-aware State Eviction (CItruS), a novel modeling technique that integrates the attention preferences useful for a downstream task into the eviction process of hidden states.
Our training-free method exhibits superior performance on long sequence comprehension and retrieval tasks over several strong baselines under the same memory budget.
arXiv Detail & Related papers (2024-06-17T18:34:58Z)
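
CItruS ties eviction of cached hidden states to the attention preferences of a downstream instruction; the chunked, instruction-aware criterion itself is not described in this summary. The sketch below shows the generic building block, evicting the least-attended cache entries under a fixed memory budget, with a plain attention-mass heuristic standing in for the paper's instruction-aware scoring.

```python
import numpy as np

def evict_cache(keys, values, queries, budget):
    """Generic sketch of attention-based state eviction: keep only the `budget`
    cached key/value pairs that receive the most attention mass from a set of
    probe queries (here, plain queries; CItruS uses instruction-aware ones)."""
    d = keys.shape[1]
    logits = (queries @ keys.T) / np.sqrt(d)                # (num_queries, cache_size)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    importance = weights.sum(axis=0)                        # attention mass per cached state
    keep = np.sort(np.argsort(importance)[-budget:])        # retained states, original order
    return keys[keep], values[keep]

rng = np.random.default_rng(0)
keys = rng.normal(size=(128, 32))          # cached states for a long sequence
values = rng.normal(size=(128, 32))
queries = rng.normal(size=(4, 32))         # e.g. states of the task instruction
k_small, v_small = evict_cache(keys, values, queries, budget=32)
print(k_small.shape, v_small.shape)        # (32, 32) (32, 32)
```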
- MeMSVD: Long-Range Temporal Structure Capturing Using Incremental SVD [27.472705540825316]
This paper is on long-term video understanding, where the goal is to recognise human actions over long temporal windows (up to minutes long).
We propose an alternative to attention-based schemes which is based on a low-rank approximation of the memory obtained using Singular Value Decomposition.
Our scheme has two advantages: (a) it reduces complexity by more than an order of magnitude, and (b) it is amenable to an efficient implementation for the calculation of the memory bases.
arXiv Detail & Related papers (2024-06-11T12:03:57Z)
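
The entry above replaces attention over a long temporal memory with a low-rank approximation of that memory obtained via SVD, maintained incrementally. The sketch below shows only the low-rank compression step, with NumPy's batch SVD recomputed over the compressed memory plus new frames; a truly incremental SVD update, as the paper's title implies, achieves the same effect more efficiently.

```python
import numpy as np

class LowRankMemory:
    """Sketch of an SVD-compressed memory: instead of storing every past frame
    feature, keep only the top-r components. Here the SVD is simply recomputed
    on [compressed memory; new frames]; MeMSVD uses an incremental update."""

    def __init__(self, rank=8):
        self.rank = rank
        self.memory = None                       # (rank, dim) compressed memory

    def update(self, new_frames):
        stacked = new_frames if self.memory is None else np.vstack([self.memory, new_frames])
        u, s, vt = np.linalg.svd(stacked, full_matrices=False)
        r = min(self.rank, len(s))
        # Keep the top-r components; memory size stays (rank, dim).
        self.memory = s[:r, None] * vt[:r]
        return self.memory

mem = LowRankMemory(rank=8)
rng = np.random.default_rng(0)
for _ in range(20):                              # 20 clips of 32 frames, 64-dim features
    compressed = mem.update(rng.normal(size=(32, 64)))
print(compressed.shape)                          # (8, 64)
```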
- Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention [6.713196608291278]
This work introduces an efficient method to scale Transformer-based Large Language Models to infinitely long inputs with bounded memory and computation.
A key component in our proposed approach is a new attention technique dubbed Infini-attention.
arXiv Detail & Related papers (2024-04-10T16:18:42Z)
- Accurate Block Quantization in LLMs with Outliers [0.6138671548064355]
The demand for inference on extremely large-scale LLMs has grown enormously in recent months.
The problem is aggravated by the rapid rise in the lengths of the sequences being processed.
Various quantization techniques have been proposed that allow accurate quantization for both weights and activations.
arXiv Detail & Related papers (2024-03-29T12:15:06Z)
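
The entry above concerns block-wise quantization of LLM weights and activations in the presence of outliers; the paper's specific scheme is not described in this summary. Below is a generic sketch of symmetric per-block int8 quantization, where each block gets its own scale so that a single outlier only degrades its own block; the block size and bit width are illustrative choices.

```python
import numpy as np

def quantize_blocks(x, block=64, bits=8):
    """Generic per-block symmetric quantization sketch: split a flat tensor into
    blocks, give each block its own max-abs scale, and round to signed ints.
    An outlier only inflates the scale of the block that contains it."""
    qmax = 2 ** (bits - 1) - 1
    flat = x.reshape(-1, block)                         # assumes size divisible by block
    scales = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                           # avoid division by zero for all-zero blocks
    q = np.clip(np.round(flat / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_blocks(q, scales, shape):
    return (q.astype(np.float32) * scales).reshape(shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w[0, 0] = 50.0                                          # a single outlier weight
q, s = quantize_blocks(w)
err = np.abs(dequantize_blocks(q, s, w.shape) - w).mean()
print(q.dtype, err)                                     # int8, small mean reconstruction error
```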
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
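
The entry above augments a network with learnable memory tokens read through attention; the heterogeneous design itself is not detailed in this summary. The sketch below shows the common building block of concatenating a small set of memory tokens to the input sequence before self-attention; the single-head formulation and shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_with_memory_tokens(x, memory):
    """Sketch: concatenate a small set of (learnable) memory tokens to the input
    tokens and run single-head self-attention over the combined sequence, so
    every input token can read from the shared memory at O(num_memory) extra cost."""
    d = x.shape[1]
    seq = np.vstack([memory, x])                                 # (m + n, d)
    weights = softmax((seq @ seq.T) / np.sqrt(d), axis=-1)       # (m + n, m + n)
    out = weights @ seq
    return out[len(memory):]                                     # keep only the input-token outputs

rng = np.random.default_rng(0)
memory_tokens = rng.normal(size=(4, 32))                         # stand-in for learned memory tokens
inputs = rng.normal(size=(10, 32))
print(attend_with_memory_tokens(inputs, memory_tokens).shape)    # (10, 32)
```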
- Joint Modeling of Feature, Correspondence, and a Compressed Memory for Video Object Segmentation [52.11279360934703]
Current prevailing Video Object Segmentation (VOS) methods usually perform dense matching between the current and reference frames after extracting features.
We propose a unified VOS framework, coined as JointFormer, for joint modeling of the three elements of feature, correspondence, and a compressed memory.
arXiv Detail & Related papers (2023-08-25T17:30:08Z)
- Memory Efficient Neural Processes via Constant Memory Attention Block [55.82269384896986]
Constant Memory Attentive Neural Processes (CMANPs) are a Neural Process (NP) variant that requires only constant memory.
We show CMANPs achieve state-of-the-art results on popular NP benchmarks while being significantly more memory efficient than prior methods.
arXiv Detail & Related papers (2023-05-23T23:10:19Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.