Adaptive recurrent vision performs zero-shot computation scaling to
unseen difficulty levels
- URL: http://arxiv.org/abs/2311.06964v1
- Date: Sun, 12 Nov 2023 21:07:04 GMT
- Title: Adaptive recurrent vision performs zero-shot computation scaling to
unseen difficulty levels
- Authors: Vijay Veerabadran, Srinivas Ravishankar, Yuan Tang, Ritik Raina,
Virginia R. de Sa
- Abstract summary: We investigate whether adaptive computation can also enable vision models to extrapolate solutions beyond their training distribution's difficulty level.
We combine convolutional recurrent neural networks (ConvRNNs) with a learnable halting mechanism based on Graves (2016), and evaluate on two challenging visual reasoning tasks: PathFinder and Mazes.
We show that 1) AdRNNs learn to dynamically halt processing early (or late) to solve easier (or harder) problems, and 2) these RNNs zero-shot generalize to more difficult problem settings not shown during training by dynamically increasing the number of recurrent iterations at test time.
- Score: 6.053394076324473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans solving algorithmic (or) reasoning problems typically exhibit solution
times that grow as a function of problem difficulty. Adaptive recurrent neural
networks have been shown to exhibit this property for various
language-processing tasks. However, little work has been performed to assess
whether such adaptive computation can also enable vision models to extrapolate
solutions beyond their training distribution's difficulty level, with prior
work focusing on very simple tasks. In this study, we investigate a critical
functional role of such adaptive processing using recurrent neural networks: to
dynamically scale computational resources conditional on input requirements
that allow for zero-shot generalization to novel difficulty levels not seen
during training using two challenging visual reasoning tasks: PathFinder and
Mazes. We combine convolutional recurrent neural networks (ConvRNNs) with a
learnable halting mechanism based on Graves (2016). We explore various
implementations of such adaptive ConvRNNs (AdRNNs) ranging from tying weights
across layers to more sophisticated biologically inspired recurrent networks
that possess lateral connections and gating. We show that 1) AdRNNs learn to
dynamically halt processing early (or late) to solve easier (or harder)
problems, 2) these RNNs zero-shot generalize to more difficult problem settings
not shown during training by dynamically increasing the number of recurrent
iterations at test time. Our study provides modeling evidence supporting the
hypothesis that recurrent processing enables the functional advantage of
adaptively allocating compute resources conditional on input requirements and
hence allowing generalization to harder difficulty levels of a visual reasoning
problem without training.
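The halting mechanism described in the abstract follows the Adaptive Computation Time scheme of Graves (2016). Below is a minimal PyTorch sketch of one way to attach such a halting unit to a weight-tied ConvRNN; the layer shapes, channel counts, halting threshold, simplified ponder penalty, and the `max_steps` override at test time are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the paper's code) of a ConvRNN with a Graves (2016)-style
# adaptive halting mechanism. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn


class AdaptiveConvRNN(nn.Module):
    def __init__(self, channels=32, max_steps=20, eps=0.01):
        super().__init__()
        self.max_steps = max_steps
        self.eps = eps  # ACT slack: halt once cumulative halting prob exceeds 1 - eps
        self.encode = nn.Conv2d(3, channels, 3, padding=1)
        # Recurrent update with weights tied across iterations.
        self.update = nn.Conv2d(2 * channels, channels, 3, padding=1)
        # Per-image halting probability computed from the current hidden state.
        self.halt = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, 1), nn.Sigmoid())
        self.readout = nn.Conv2d(channels, 1, 1)  # e.g. per-pixel path/maze prediction

    def forward(self, x, max_steps=None):
        # max_steps can be raised at test time to allow extra recurrent iterations
        # on harder inputs (zero-shot computation scaling).
        T = max_steps or self.max_steps
        inp = self.encode(x)
        h = torch.zeros_like(inp)
        out = torch.zeros_like(self.readout(h))
        cum_p = x.new_zeros(x.size(0), 1)       # cumulative halting probability
        remainder = x.new_ones(x.size(0), 1)    # leftover probability mass
        ponder_cost = x.new_zeros(x.size(0), 1)
        for t in range(T):
            h = torch.relu(self.update(torch.cat([inp, h], dim=1)))
            p = self.halt(h)                                   # halting prob this step
            still_running = (cum_p < 1 - self.eps).float()
            last_step = ((cum_p + p) >= 1 - self.eps).float() * still_running
            # Weight for this step's readout: p while running, leftover mass when halting.
            w = p * still_running * (1 - last_step) + remainder * last_step
            out = out + w.view(-1, 1, 1, 1) * self.readout(h)
            ponder_cost = ponder_cost + still_running          # simplified ponder penalty
            remainder = remainder - p * still_running
            cum_p = cum_p + p * still_running
            if (cum_p >= 1 - self.eps).all():
                break
        # Note: if T is exhausted before halting, the output weights sum to < 1.
        return out, ponder_cost.mean()


if __name__ == "__main__":
    model = AdaptiveConvRNN()
    imgs = torch.randn(4, 3, 64, 64)
    pred, ponder = model(imgs)                  # train with task loss + tau * ponder
    harder_pred, _ = model(imgs, max_steps=60)  # test time: allow more iterations
    print(pred.shape, float(ponder))
```

Raising `max_steps` at inference is the knob that lets the network spend additional recurrent iterations on inputs harder than anything seen during training, which is the zero-shot computation scaling the paper studies.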
Related papers
- Convex Formulations for Training Two-Layer ReLU Neural Networks [21.88871868680998]
Solving non-convex, NP-hard optimization problems is crucial for training machine learning models.
We introduce a semidefinite relaxation for training finite-width two-layer ReLU neural networks.
arXiv Detail & Related papers (2024-10-29T17:53:15Z)
- Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks [2.184775414778289]
We provide a unified framework for analyzing degeneracy across three levels: behavior, neural dynamics, and weight space.
We analyzed RNNs trained on diverse tasks across machine learning and neuroscience domains.
arXiv Detail & Related papers (2024-10-04T23:23:55Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Transferability of Convolutional Neural Networks in Stationary Learning Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
arXiv Detail & Related papers (2023-07-21T13:51:45Z)
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Recognizing and Verifying Mathematical Equations using Multiplicative Differential Neural Units [86.9207811656179]
We show that memory-augmented neural networks (NNs) can achieve higher-order extrapolation, stable performance, and faster convergence.
Our models achieve a 1.53% average improvement over current state-of-the-art methods in equation verification and achieve a 2.22% Top-1 average accuracy and 2.96% Top-5 average accuracy for equation completion.
arXiv Detail & Related papers (2021-04-07T03:50:11Z)
- Thinking Deeply with Recurrence: Generalizing from Easy to Hard Sequential Reasoning Problems [51.132938969015825]
We observe that recurrent networks have the uncanny ability to closely emulate the behavior of non-recurrent deep models.
We show that recurrent networks that are trained to solve simple mazes with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference.
arXiv Detail & Related papers (2021-02-22T14:09:20Z)
- A Principle of Least Action for the Training of Neural Networks [10.342408668490975]
We show the presence of a low kinetic energy displacement bias in the transport map of the network, and link this bias with generalization performance.
We propose a new learning algorithm, which automatically adapts to the complexity of the given task, and leads to networks with a high generalization ability even in low data regimes.
arXiv Detail & Related papers (2020-09-17T15:37:34Z)
- Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks [0.0]
Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure.
In this work, we trained small fully-connected RNNs to perform temporal and flow control tasks with time-varying stimuli.
arXiv Detail & Related papers (2019-06-03T21:56:48Z)