Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning
- URL: http://arxiv.org/abs/2205.14794v1
- Date: Mon, 30 May 2022 00:12:33 GMT
- Title: Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning
- Authors: Aniket Didolkar, Kshitij Gupta, Anirudh Goyal, Alex Lamb, Nan Rosemary Ke, Yoshua Bengio
- Abstract summary: Recurrent neural networks have a strong inductive bias towards learning temporally compressed representations.
Transformers have little inductive bias towards learning temporally compressed representations.
- Score: 85.95599675484341
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent neural networks have a strong inductive bias towards learning
temporally compressed representations, as the entire history of a sequence is
represented by a single vector. By contrast, Transformers have little inductive
bias towards learning temporally compressed representations, as they allow for
attention over all previously computed elements in a sequence. Having a more
compressed representation of a sequence may be beneficial for generalization,
as a high-level representation may be more easily re-used and re-purposed and
will contain fewer irrelevant details. At the same time, excessive compression
of representations comes at the cost of expressiveness. We propose a solution
that divides computation into two streams. A slow stream that is recurrent in
nature aims to learn a specialized and compressed representation by forcing
chunks of $K$ time steps into a single representation, which is divided into
multiple vectors. At the same time, a fast stream is parameterized as a
Transformer that processes chunks of $K$ time steps conditioned on the
information in the slow stream. With the proposed approach we hope to gain the
expressiveness of the Transformer while encouraging better compression and
structuring of representations in the slow stream. We show the benefits of the
proposed method in terms of improved sample efficiency and generalization
performance as compared to various competitive baselines on visual perception
and sequential decision-making tasks.
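To make the two-stream design concrete, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the cross-attention wiring, the number of slow-state vectors (`num_slots`), and all sizes are assumptions, and positional encodings and causal masking are omitted.

```python
import torch
import torch.nn as nn

class TemporalLatentBottleneckSketch(nn.Module):
    """Sketch: a fast Transformer stream over chunks of K steps,
    conditioned on a slow recurrent state of several latent vectors."""

    def __init__(self, dim=128, k=8, num_slots=4, num_heads=4):
        super().__init__()
        self.k = k
        # Slow-stream state: num_slots latent vectors, updated once per chunk.
        self.init_slots = nn.Parameter(torch.randn(num_slots, dim))
        # Fast stream: a Transformer encoder layer applied within each chunk.
        self.fast = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        # Fast tokens read from the slow state via cross-attention.
        self.read = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Slow state is updated (recurrently) by attending over the chunk.
        self.write = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (batch, T, dim), T % k == 0
        b, t, d = x.shape
        slots = self.init_slots.expand(b, -1, -1)
        outputs = []
        for chunk in x.split(self.k, dim=1):   # process K steps at a time
            h = self.fast(chunk)               # fast, parallel within the chunk
            h = h + self.read(h, slots, slots)[0]     # condition on slow stream
            # Compress the whole chunk into the slot vectors (slow update).
            slots = slots + self.write(slots, h, h)[0]
            outputs.append(h)
        return torch.cat(outputs, dim=1), slots
```

The key property is visible in the loop: the fast stream attends within each chunk of `k` steps, while the slow state is updated only once per chunk, which is what enforces temporal compression.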
Related papers
- PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting [82.03373838627606]
The self-attention mechanism in the Transformer architecture requires positional embeddings to encode temporal order in time series prediction.
We argue that this reliance on positional embeddings restricts the Transformer's ability to effectively represent temporal sequences.
We present a model integrating PRE with a standard Transformer encoder, demonstrating state-of-the-art performance on various real-world datasets.
arXiv Detail & Related papers (2024-08-20T01:56:07Z)
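A rough sketch of the idea summarized above: derive each token's embedding from recurrent encoders run over a pyramid of temporal resolutions, so the encoder itself needs no positional embeddings. The pooling scales, GRU choice, and sizes below are illustrative assumptions, not the paper's PRE specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidalRecurrentEmbedding(nn.Module):
    """Sketch: embed a univariate series via GRUs over several temporal
    resolutions, so order is carried by recurrence, not position codes."""

    def __init__(self, dim=64, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.grus = nn.ModuleList(
            nn.GRU(1, dim, batch_first=True) for _ in scales)
        self.proj = nn.Linear(dim * len(scales), dim)

    def forward(self, x):                       # x: (batch, T, 1)
        feats = []
        for s, gru in zip(self.scales, self.grus):
            xs = F.avg_pool1d(x.transpose(1, 2), s).transpose(1, 2)
            h, _ = gru(xs)                      # recurrence encodes order
            # Upsample coarse features back to length T.
            h = F.interpolate(h.transpose(1, 2), size=x.size(1),
                              mode="nearest").transpose(1, 2)
            feats.append(h)
        return self.proj(torch.cat(feats, dim=-1))   # (batch, T, dim)

# Feed the embedding to a standard, position-embedding-free encoder:
enc = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(64, 4, batch_first=True), num_layers=2)
tokens = PyramidalRecurrentEmbedding()(torch.randn(8, 32, 1))
out = enc(tokens)
```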
- Token Recycling for Efficient Sequential Inference with Vision Transformers [3.9906557901972897]
Vision Transformers (ViTs) outperform Convolutional Neural Networks in processing incomplete inputs because they do not require the imputation of missing values.
ViTs are computationally inefficient because they perform a full forward pass each time a piece of new sequential information arrives.
We introduce the TOken REcycling (TORE) modification for ViT inference, which can be used with any architecture.
arXiv Detail & Related papers (2023-11-26T15:39:57Z)
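One way to picture the token-recycling idea above is to split the network into a per-token extractor whose outputs are cached, plus a light aggregator that is re-run as new patches arrive. The split point, module choices, and pooling below are hypothetical, not the paper's TORE implementation.

```python
import torch
import torch.nn as nn

class RecyclingViTSketch(nn.Module):
    """Sketch: cache per-token features from the early layers; only the
    aggregation over the sequence so far is recomputed each step."""

    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(dim, dim), nn.GELU())
        self.aggregator = nn.TransformerEncoderLayer(dim, num_heads,
                                                     batch_first=True)
        self.cache = None                      # recycled token embeddings

    def observe(self, new_tokens):             # new_tokens: (batch, n, dim)
        feats = self.extractor(new_tokens)     # computed once per token
        self.cache = feats if self.cache is None else torch.cat(
            [self.cache, feats], dim=1)
        # Only the aggregation is recomputed for the full sequence so far.
        return self.aggregator(self.cache).mean(dim=1)   # pooled prediction

model = RecyclingViTSketch()
for step in range(4):                          # sequential arrival of patches
    pred = model.observe(torch.randn(2, 8, 64))
```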
- White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is? [27.58916930770997]
We present a family of white-box transformer-like deep network architectures, named CRATE, which are mathematically fully interpretable.
Experiments show that these networks, despite their simplicity, indeed learn to compress and sparsify representations of large-scale real-world image and text datasets.
arXiv Detail & Related papers (2023-11-22T02:23:32Z)
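The sparsification half of such a white-box layer can be illustrated by a single ISTA step, where soft thresholding is the proximal operator of an L1 penalty. The dictionary `D`, step size, and threshold below are assumptions for illustration, not CRATE's exact operators.

```python
import torch

def ista_step(z, D, lam=0.1, step=0.5):
    """One proximal-gradient (ISTA) step toward a sparse code u with
    D @ u ~ z: a gradient step on the reconstruction error followed by
    soft thresholding, which promotes sparsity."""
    u = torch.zeros(D.shape[1], z.shape[1])    # start from the zero code
    grad = D.T @ (D @ u - z)                   # gradient of 0.5*||Du - z||^2
    u = u - step * grad
    return torch.sign(u) * torch.clamp(u.abs() - step * lam, min=0.0)

D = torch.randn(64, 128) / 8                   # assumed overcomplete dictionary
z = torch.randn(64, 16)                        # token features as columns
u = ista_step(z, D)                            # sparse representation
```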
- Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods [2.8645507575980074]
We simplify convolutions by viewing them as tensor networks (TNs).
TNs allow reasoning about the underlying tensor multiplications by drawing diagrams, manipulating them to perform function transformations like differentiation, and efficiently evaluating them with einsum.
Our TN implementation accelerates a KFAC variant by up to 4.5x while removing the standard implementation's memory overhead, and enables new hardware-efficient dropouts for approximate backpropagation.
arXiv Detail & Related papers (2023-07-05T13:19:41Z)
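The einsum view of convolution is easy to reproduce: unfold the input into patches and contract them with the kernel. A generic illustration (not the paper's TN implementation) that checks itself against `F.conv2d`:

```python
import torch
import torch.nn.functional as F

def conv2d_einsum(x, w):
    """2-D convolution written as an einsum over unfolded patches.
    x: (batch, c_in, H, W), w: (c_out, c_in, kh, kw)."""
    c_out, c_in, kh, kw = w.shape
    h_out, w_out = x.shape[2] - kh + 1, x.shape[3] - kw + 1
    # unfold extracts every (c_in*kh*kw)-sized patch as a column.
    patches = F.unfold(x, (kh, kw))                # (batch, c_in*kh*kw, L)
    patches = patches.view(x.shape[0], c_in, kh, kw, h_out * w_out)
    out = torch.einsum("bcklp,ockl->bop", patches, w)
    return out.view(x.shape[0], c_out, h_out, w_out)

x, w = torch.randn(2, 3, 8, 8), torch.randn(5, 3, 3, 3)
assert torch.allclose(conv2d_einsum(x, w), F.conv2d(x, w), atol=1e-5)
```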
- TAPIR: Learning Adaptive Revision for Incremental Natural Language Understanding with a Two-Pass Model [14.846377138993645]
Recent neural network-based approaches for incremental processing mainly use RNNs or Transformers.
A restart-incremental interface that repeatedly passes longer input prefixes can be used to obtain partial outputs, while providing the ability to revise.
We propose the Two-pass model for AdaPtIve Revision (TAPIR) and introduce a method to obtain an incremental supervision signal for learning an adaptive revision policy.
arXiv Detail & Related papers (2023-05-18T09:58:19Z)
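The restart-incremental interface described above is simple to state in code: re-run the model on every longer prefix and diff against the previous partial output to find revisions. A toy sketch of that baseline interface (TAPIR itself replaces the costly re-passes with a learned revision policy):

```python
def restart_incremental(model, tokens):
    """Sketch of a restart-incremental interface: recompute the full
    output for every prefix and report which earlier labels got revised."""
    previous = []
    for t in range(1, len(tokens) + 1):
        current = model(tokens[:t])            # full re-pass over the prefix
        revisions = [i for i, (old, new) in
                     enumerate(zip(previous, current)) if old != new]
        yield current[-1], revisions           # newest label + revised indices
        previous = current

# Toy 'model': label each token by the parity of the prefix length.
toy = lambda prefix: [len(prefix) % 2] * len(prefix)
for label, revised in restart_incremental(toy, list("abcd")):
    print(label, revised)
```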
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that inter-correlations and intra-correlations exist when observing latent variables from a vectorized perspective.
Our model achieves better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
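Rate estimation under a Gaussian mixture prior, the kind of entropy model discussed above, can be sketched by scoring quantized latents with the mixture's mass over unit quantization bins. The mixture size and parameter shapes below are assumptions:

```python
import torch

def gmm_bits(y, weights, means, scales):
    """Estimated rate of latents y under a Gaussian mixture prior:
    -log2 p(y), with a K-component mixture per latent element.
    y: (n,), weights/means/scales: (n, K)."""
    comp = torch.distributions.Normal(means, scales)
    # P(y) approximated as the mass of the unit quantization bin.
    probs = (comp.cdf(y.unsqueeze(-1) + 0.5)
             - comp.cdf(y.unsqueeze(-1) - 0.5))      # (n, K)
    p = (weights.softmax(dim=-1) * probs).sum(-1).clamp_min(1e-9)
    return -p.log2().sum()                           # total bits for y

y = torch.randint(-5, 6, (10,)).float()             # quantized latents
bits = gmm_bits(y, torch.zeros(10, 3), torch.zeros(10, 3),
                torch.ones(10, 3))
```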
- High-performance symbolic-numerics via multiple dispatch [52.77024349608834]
Symbolics.jl is an extendable symbolic system which uses dynamic multiple dispatch to change behavior depending on the domain needs.
We show that by formalizing a generic API on actions independent of implementation, we can retroactively add optimized data structures to our system.
We demonstrate the ability to swap between classical term-rewriting simplifiers and e-graph-based term-rewriting simplifiers.
arXiv Detail & Related papers (2021-05-09T14:22:43Z)
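The mechanism in question is Julia's multiple dispatch, which Python lacks; still, `functools.singledispatch` is enough to illustrate the quoted point that optimized data structures can be registered retroactively behind a generic API. A loose analogue, not Symbolics.jl's design:

```python
from functools import singledispatch

@singledispatch
def simplify(expr):
    """Generic action; concrete behavior depends on the representation."""
    raise NotImplementedError(type(expr))

@simplify.register
def _(expr: list):                  # naive tree representation
    return [simplify(e) if isinstance(e, list) else e for e in expr]

# Later, an optimized data structure is added retroactively, without
# touching the generic API or any existing call sites.
class FlatExpr(tuple):
    pass

@simplify.register
def _(expr: FlatExpr):
    return FlatExpr(sorted(expr))   # e.g., canonical ordering in O(n log n)

print(simplify(["+", "x", ["*", "y", "z"]]))
print(simplify(FlatExpr(("z", "x", "y"))))
```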
- Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing [112.2208052057002]
We propose Funnel-Transformer, which gradually compresses the sequence of hidden states to a shorter one.
With comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks.
arXiv Detail & Related papers (2020-06-05T05:16:23Z)
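The gradual compression above can be sketched as strided pooling of the hidden-state sequence between encoder blocks; the pooling operator and sizes are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FunnelEncoderSketch(nn.Module):
    """Sketch: Transformer blocks separated by stride-2 pooling, so each
    stage runs attention over a sequence half as long as the last."""

    def __init__(self, dim=64, num_heads=4, stages=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
            for _ in range(stages))

    def forward(self, h):                       # h: (batch, T, dim)
        for i, block in enumerate(self.blocks):
            if i > 0:                           # compress before later stages
                h = F.avg_pool1d(h.transpose(1, 2), 2).transpose(1, 2)
            h = block(h)                        # cheaper: shorter sequence
        return h                                # length T / 2**(stages-1)

out = FunnelEncoderSketch()(torch.randn(2, 32, 64))   # -> (2, 8, 64)
```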
- Addressing Some Limitations of Transformers with Feedback Memory [51.94640029417114]
Transformers have been successfully applied to sequential, auto-regressive tasks despite being feedforward networks.
We propose the Feedback Transformer architecture that exposes all previous representations to all future representations.
We demonstrate on a variety of benchmarks in language modeling, machine translation, and reinforcement learning that the increased representation capacity can create small, shallow models with much stronger performance than comparable Transformers.
arXiv Detail & Related papers (2020-02-21T16:37:57Z)
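The feedback-memory idea trades the Transformer's training parallelism for recurrence: every layer at step t attends to one shared memory aggregating all layers of earlier steps. A minimal sketch with assumed mixing weights and module choices:

```python
import torch
import torch.nn as nn

class FeedbackTransformerSketch(nn.Module):
    """Sketch: step-by-step decoding where every layer of step t attends
    to one shared memory built from all layers of steps < t."""

    def __init__(self, dim=64, num_heads=4, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_layers))
        # Learned softmax weights for mixing layer outputs into the memory.
        self.mix = nn.Parameter(torch.zeros(num_layers + 1))

    def forward(self, x):                       # x: (batch, T, dim)
        memory, outputs = [], []
        for t in range(x.size(1)):
            h = x[:, t : t + 1]                 # one step at a time
            states = [h]
            mem = (torch.stack(memory, dim=1) if memory
                   else h)                      # steps < t (self at t=0)
            for attn in self.layers:
                h = h + attn(h, mem, mem)[0]    # every layer sees the memory
                states.append(h)
            w = self.mix.softmax(0)
            memory.append(sum(wi * si[:, 0] for wi, si in
                              zip(w, states)))  # aggregate all layers
            outputs.append(h[:, 0])
        return torch.stack(outputs, dim=1)

out = FeedbackTransformerSketch()(torch.randn(2, 5, 64))
```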