The Neural Data Router: Adaptive Control Flow in Transformers Improves
Systematic Generalization
- URL: http://arxiv.org/abs/2110.07732v1
- Date: Thu, 14 Oct 2021 21:24:27 GMT
- Title: The Neural Data Router: Adaptive Control Flow in Transformers Improves
Systematic Generalization
- Authors: Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber
- Abstract summary: We propose two modifications to the Transformer architecture, copy gate and geometric attention.
Our novel Neural Data Router (NDR) achieves 100% length generalization accuracy on the classic compositional table lookup task.
NDR's attention and gating patterns tend to be interpretable as an intuitive form of neural routing.
- Score: 8.424405898986118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite successes across a broad range of applications, Transformers have
limited success in systematic generalization. The situation is especially
frustrating in the case of algorithmic tasks, where they often fail to find
intuitive solutions that route relevant information to the right node/operation
at the right time in the grid represented by Transformer columns. To facilitate
the learning of useful control flow, we propose two modifications to the
Transformer architecture, copy gate and geometric attention. Our novel Neural
Data Router (NDR) achieves 100% length generalization accuracy on the classic
compositional table lookup task, as well as near-perfect accuracy on the simple
arithmetic task and a new variant of ListOps testing for generalization across
computational depth. NDR's attention and gating patterns tend to be
interpretable as an intuitive form of neural routing. Our code is public.
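The two mechanisms are simple to sketch. Below is a minimal PyTorch illustration of the ideas as the abstract states them: a copy gate that lets a column pass its input through unchanged when no update is needed, and geometric attention, which treats key positions in order of distance from the query and assigns them geometrically decaying weights, so the nearest "accepting" position captures most of the attention. This is a sketch under assumptions, not the authors' released implementation; all module and argument names here are illustrative.

```python
import torch
import torch.nn as nn

class CopyGateLayer(nn.Module):
    """A Transformer-column update with a copy gate: when the gate is
    closed (g ~ 0), the column copies its input unchanged to the next
    layer. Sketch only; not the authors' released implementation."""

    def __init__(self, d_model: int, d_ff: int = 256):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                    nn.Linear(d_ff, d_model))
        self.gate = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                  nn.Linear(d_ff, d_model))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        u = self.update(h)               # candidate update for each column
        g = torch.sigmoid(self.gate(h))  # per-unit copy gate in (0, 1)
        return g * u + (1.0 - g) * h     # g ~ 0 => identity / copy


def geometric_attention(scores: torch.Tensor) -> torch.Tensor:
    """Geometric attention over key positions assumed pre-sorted from
    nearest to farthest for each query row. Position j gets weight
    p_j * prod_{k < j} (1 - p_k): the first position that 'accepts'
    (high sigmoid score) captures most of the attention mass."""
    p = torch.sigmoid(scores)                  # per-position acceptance prob.
    log_keep = torch.log1p(-p + 1e-9)          # log(1 - p), numerically safe
    excl = torch.cumsum(log_keep, dim=-1) - log_keep  # sum over k < j only
    return p * torch.exp(excl)


# Quick check: weights are non-negative and sum to at most 1 per query;
# the leftover mass corresponds to "attend nowhere".
w = geometric_attention(torch.randn(4, 6))
print(w.sum(dim=-1))
```

When the gate output is near zero the layer acts as the identity, which is what lets a column hold an operand untouched until the step that consumes it.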
Related papers
- Transformers meet Neural Algorithmic Reasoners [16.5785372289558]
We propose a novel approach that combines the Transformer's language understanding with the robustness of graph neural network (GNN)-based neural algorithmic reasoners (NARs).
We evaluate our resulting TransNAR model on CLRS-Text, the text-based version of the CLRS-30 benchmark, and demonstrate significant gains over Transformer-only models for algorithmic reasoning.
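As a rough sketch of how such a combination can be wired, the Transformer's token stream can cross-attend into the NAR's node embeddings. The module below is an illustrative guess at that interface; names and shapes are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Token embeddings cross-attend to node embeddings from a GNN-based
    neural algorithmic reasoner (NAR). Illustrative sketch of the TransNAR
    idea; names and shapes are assumptions, not the paper's interface."""

    def __init__(self, d_text: int, d_graph: int, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(d_graph, d_text)  # map NAR states into text dim
        self.attn = nn.MultiheadAttention(d_text, n_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor, nodes: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T, d_text) from the language model
        # nodes:  (B, N, d_graph) from the algorithmic reasoner
        kv = self.proj(nodes)
        fused, _ = self.attn(query=tokens, key=kv, value=kv)
        return tokens + fused  # residual: text stream enriched with NAR state
```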
arXiv Detail & Related papers (2024-06-13T16:42:06Z)
- Understanding Transformer Reasoning Capabilities via Graph Algorithms [25.08208816144745]
We study which transformer scaling regimes are able to perfectly solve different classes of algorithmic problems.
Our results show that transformers excel at many graph reasoning tasks, even outperforming specialized graph neural networks.
arXiv Detail & Related papers (2024-05-28T18:31:14Z)
- What Algorithms can Transformers Learn? A Study in Length Generalization [23.970598914609916]
We study the scope of Transformers' abilities in the specific setting of length generalization on algorithmic tasks.
Specifically, we leverage RASP -- a programming language designed for the computational model of a Transformer.
Our work provides a novel perspective on the mechanisms of compositional generalization and the algorithmic capabilities of Transformers.
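For intuition, RASP programs are built from primitives such as select (a boolean attention pattern) and aggregate (averaging the selected values). The toy interpreter below mimics those two primitives in plain Python; it is a pedagogical reconstruction, not the actual RASP implementation.

```python
import numpy as np

def select(keys, queries, predicate):
    """RASP-style select: boolean attention matrix S[q, k] = predicate(k, q)."""
    return np.array([[predicate(k, q) for k in keys] for q in queries])

def aggregate(selector, values):
    """RASP-style aggregate: mean of the selected values per query position."""
    out = []
    for row in selector:
        sel = [v for v, s in zip(values, row) if s]
        out.append(sum(sel) / len(sel) if sel else 0)
    return out

# Example: a "shift right by one" program, expressible in RASP and hence
# (per the paper's thesis) learnable with length generalization.
tokens = [5, 7, 9, 11]
idx = list(range(len(tokens)))
shift = select(idx, idx, lambda k, q: k == q - 1)
print(aggregate(shift, tokens))  # [0, 5, 7, 9]
```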
arXiv Detail & Related papers (2023-10-24T17:43:29Z)
- Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs [75.40636935415601]
Deep learning often faces the challenge of efficiently processing dynamic inputs, such as sensor data or user inputs.
We take an incremental computing approach, looking to reuse calculations as the inputs change.
We apply this approach to the Transformer architecture, creating an efficient incremental inference algorithm with complexity proportional to the fraction of modified inputs.
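A minimal illustration of the reuse idea for the position-wise parts of a transformer layer: cache the previous input and output, and recompute only the positions whose inputs changed. Self-attention mixes positions and needs additional machinery, which the paper addresses; this sketch and its names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IncrementalFFN(nn.Module):
    """Position-wise feed-forward block that, at inference time, recomputes
    only the positions whose inputs changed since the last call, so the cost
    scales with the fraction of modified inputs. Illustrative sketch only."""

    def __init__(self, d_model: int, d_ff: int = 256):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.cached_in = None   # previous input,  shape (T, d_model)
        self.cached_out = None  # previous output, shape (T, d_model)

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.cached_in is None or self.cached_in.shape != x.shape:
            self.cached_in, self.cached_out = x.clone(), self.ffn(x)
            return self.cached_out
        changed = (x != self.cached_in).any(dim=-1)  # per-position dirty bit
        if changed.any():
            self.cached_out[changed] = self.ffn(x[changed])  # only dirty rows
            self.cached_in[changed] = x[changed]
        return self.cached_out
```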
arXiv Detail & Related papers (2023-07-27T16:30:27Z)
- General Neural Gauge Fields [100.35916421218101]
We develop a learning framework to jointly optimize gauge transformations and neural fields.
We derive an information-invariant gauge transformation that inherently preserves scene information and yields superior performance.
arXiv Detail & Related papers (2023-05-05T12:08:57Z)
- Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation [105.22961467028234]
Skip connections and normalisation layers are ubiquitous in the training of Deep Neural Networks (DNNs).
Recent approaches such as Deep Kernel Shaping have made progress towards reducing our reliance on them.
But these approaches are incompatible with the self-attention layers present in transformers.
arXiv Detail & Related papers (2023-02-20T21:26:25Z)
- Pathfinding Neural Cellular Automata [23.831530224401575]
Pathfinding is an important sub-component of a broad range of complex AI tasks, such as robot path planning, transport routing, and game playing.
We hand-code and learn models for Breadth-First Search (BFS), i.e., shortest-path finding.
We present a neural implementation of Depth-First Search (DFS), and outline how it can be combined with neural BFS to produce an NCA for computing the diameter of a graph.
We experiment with architectural modifications inspired by these hand-coded NCAs, training networks from scratch to solve the diameter problem on grid mazes while exhibiting strong generalization ability.
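To make the hand-coded BFS cells concrete, the toy NumPy update below treats each grid cell as an automaton that activates once any passable 4-neighbour is active; this is an illustrative reconstruction, not the paper's models.

```python
import numpy as np

def bfs_nca_step(active: np.ndarray, passable: np.ndarray) -> np.ndarray:
    """One NCA update for BFS wavefront expansion: a cell becomes active if
    it is passable and it, or any 4-neighbour, is already active.
    np.roll wraps around the edges, hence the wall padding below."""
    neighbours = (np.roll(active, 1, 0) | np.roll(active, -1, 0) |
                  np.roll(active, 1, 1) | np.roll(active, -1, 1))
    return passable & (active | neighbours)

# Tiny wall-padded maze: True = passable. Source in the top-left corridor.
passable = np.array([[0, 0, 0, 0, 0, 0],
                     [0, 1, 1, 0, 1, 0],
                     [0, 0, 1, 0, 1, 0],
                     [0, 0, 1, 1, 1, 0],
                     [0, 0, 0, 0, 0, 0]], dtype=bool)
active = np.zeros_like(passable)
active[1, 1] = True

steps = 0
while True:
    nxt = bfs_nca_step(active, passable)
    if (nxt == active).all():
        break
    active, steps = nxt, steps + 1
print(steps)  # 7: NCA steps needed to flood the reachable region
```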
arXiv Detail & Related papers (2023-01-17T11:45:51Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- CodedVTR: Codebook-based Sparse Voxel Transformer with Geometric Guidance [22.39628991021092]
We propose CodedVTR (Codebook-based Voxel TRansformer) for 3D sparse voxel transformers.
On the one hand, we propose codebook-based attention, which projects the attention space into a subspace represented by combinations of "prototypes" in a learnable codebook.
On the other hand, we propose geometry-aware self-attention that utilizes geometric information (geometric pattern, density) to guide attention learning.
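A rough sketch of the codebook idea: each query selects a convex mixture of a few learnable prototype attention patterns over its neighbourhood, rather than a free-form attention map. The module below is illustrative; shapes and names are assumptions, not CodedVTR's interface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodebookAttention(nn.Module):
    """Each query mixes K learnable prototype attention patterns over a
    fixed neighbourhood. Illustrative sketch of codebook-based attention."""

    def __init__(self, d_model: int, n_neighbours: int, n_prototypes: int = 8):
        super().__init__()
        # codebook: K prototype attention patterns over the neighbourhood
        self.codebook = nn.Parameter(torch.randn(n_prototypes, n_neighbours))
        self.selector = nn.Linear(d_model, n_prototypes)  # query -> mixture logits

    def forward(self, queries: torch.Tensor, neighbour_values: torch.Tensor) -> torch.Tensor:
        # queries: (B, T, d_model); neighbour_values: (B, T, N, d_model)
        mix = F.softmax(self.selector(queries), dim=-1)    # (B, T, K)
        patterns = F.softmax(self.codebook, dim=-1)        # (K, N), each a distribution
        attn = torch.einsum('btk,kn->btn', mix, patterns)  # combined attention pattern
        return torch.einsum('btn,btnd->btd', attn, neighbour_values)
```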
arXiv Detail & Related papers (2022-03-18T11:50:25Z)
- Transformers Solve the Limited Receptive Field for Monocular Depth Prediction [82.90445525977904]
We propose TransDepth, an architecture which benefits from both convolutional neural networks and transformers.
This is the first paper to apply transformers to pixel-wise prediction problems involving continuous labels.
arXiv Detail & Related papers (2021-03-22T18:00:13Z)
- Neural Complexity Measures [96.06344259626127]
We propose Neural Complexity (NC), a meta-learning framework for predicting generalization.
Our model learns a scalar complexity measure through interactions with many heterogeneous tasks in a data-driven way.
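Conceptually, NC is a regressor meta-trained to map features of a task learner to its generalization gap. The sketch below shows that shape of the idea with an assumed feature interface; the paper itself uses an attention-based architecture, so this is illustrative only.

```python
import torch
import torch.nn as nn

class NeuralComplexity(nn.Module):
    """Meta-learned regressor: maps features of a task learner (an assumed,
    illustrative feature vector here) to a scalar estimate of its
    generalization gap. Sketch of the idea, not the paper's architecture."""

    def __init__(self, n_features: int, d_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 1))

    def forward(self, task_features: torch.Tensor) -> torch.Tensor:
        return self.net(task_features).squeeze(-1)  # predicted gap per task

# Meta-training regresses the predicted gap onto the observed gap
# (test loss minus train loss) across many tasks; the prediction can
# then act as a learned regularizer for new learners.
```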
arXiv Detail & Related papers (2020-08-07T02:12:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.