The Sensory Neuron as a Transformer: Permutation-Invariant Neural
Networks for Reinforcement Learning
- URL: http://arxiv.org/abs/2109.02869v2
- Date: Wed, 29 Sep 2021 00:59:49 GMT
- Authors: Yujin Tang and David Ha
- Abstract summary: We build systems that feed each sensory input from the environment into distinct, but identical neural networks.
We show that these sensory networks can be trained to integrate information received locally, and through communication via an attention mechanism, can collectively produce a globally coherent policy.
- Score: 11.247894240593691
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In complex systems, we often observe complex global behavior emerge from a
collection of agents interacting with each other in their environment, with
each individual agent acting only on locally available information, without
knowing the full picture. Such systems have inspired development of artificial
intelligence algorithms in areas such as swarm optimization and cellular
automata. Motivated by the emergence of collective behavior from complex
cellular systems, we build systems that feed each sensory input from the
environment into distinct, but identical neural networks, each with no fixed
relationship with one another. We show that these sensory networks can be
trained to integrate information received locally, and through communication
via an attention mechanism, can collectively produce a globally coherent
policy. Moreover, the system can still perform its task even if the ordering of
its inputs is randomly permuted several times during an episode. These
permutation invariant systems also display useful robustness and generalization
properties that are broadly applicable. Interactive demo and videos of our
results: https://attentionneuron.github.io/
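The mechanism described in the abstract can be sketched minimally: each sensory input passes through the same shared embedding network, and a set of fixed learned queries attends over the resulting tokens, so the pooled output does not depend on input order. The layer sizes, weight initializations, and names below are illustrative assumptions, not the authors' actual AttentionNeuron architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, D_IN, D_MODEL, N_QUERIES = 8, 3, 16, 4

# Shared per-sensor embedding: the SAME network is applied to every input.
W_embed = rng.normal(size=(D_IN, D_MODEL))

# Fixed learned queries -- these do not depend on input order.
Q = rng.normal(size=(N_QUERIES, D_MODEL))
W_k = rng.normal(size=(D_MODEL, D_MODEL))
W_v = rng.normal(size=(D_MODEL, D_MODEL))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(obs):
    """obs: (N_INPUTS, D_IN) -> (N_QUERIES, D_MODEL), order-invariant."""
    tokens = obs @ W_embed                        # identical network per sensor
    K, V = tokens @ W_k, tokens @ W_v
    scores = softmax(Q @ K.T / np.sqrt(D_MODEL))  # attend over the sensor axis
    return scores @ V                             # weighted sum over sensors

obs = rng.normal(size=(N_INPUTS, D_IN))
out1 = attention_pool(obs)
out2 = attention_pool(obs[rng.permutation(N_INPUTS)])  # shuffled sensor order
assert np.allclose(out1, out2)  # permuting inputs leaves the output unchanged
```

Because the queries are fixed and attention sums over the key/value (sensor) axis, permuting the rows of `obs` only permutes the attention weights and values consistently, which is why the final assertion holds even mid-episode when input ordering is reshuffled.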
Related papers
- Evolving Neural Networks Reveal Emergent Collective Behavior from Minimal Agent Interactions [0.0]
We investigate how neural networks evolve to control agents' behavior in a dynamic environment.
Simpler behaviors, such as lane formation and laminar flow, are characterized by more linear network operations.
Specific environmental parameters, such as moderate noise, broader field of view, and lower agent density, promote the evolution of non-linear networks.
arXiv Detail & Related papers (2024-10-25T17:43:00Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions.
NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
arXiv Detail & Related papers (2023-10-23T00:44:17Z)
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
- Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers [6.543007700542191]
We present a novel transformer architecture for learning from time-varying data.
We show that our model can be applied to successfully recover complex interactions and dynamics in many-body systems.
Our results show that it is possible to learn from neurons in one animal's brain and transfer the model to neurons in a different animal's brain, with interpretable neuron correspondence across sets and animals.
arXiv Detail & Related papers (2022-06-10T07:14:57Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
But deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- A Framework for Learning Invariant Physical Relations in Multimodal Sensory Processing [0.0]
We design a novel neural network architecture capable of learning, in an unsupervised manner, relations among sensory cues.
We describe the core system functionality when learning arbitrary non-linear relations in low-dimensional sensory data.
We demonstrate this through a real-world learning problem, where, from standard RGB camera frames, the network learns the relations between physical quantities.
arXiv Detail & Related papers (2020-06-30T08:42:48Z)
- Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example [14.91507266777207]
We show that a recurrent neural network can learn to modify its representation of complex information using only examples.
We provide a mechanism for how these computations are learned, and demonstrate that a single network can simultaneously learn multiple computations.
arXiv Detail & Related papers (2020-05-03T20:51:46Z)
- Deep learning reveals hidden interactions in complex systems [0.0]
AgentNet is a model-free data-driven framework consisting of deep neural networks to reveal hidden interactions in complex systems.
A demonstration with empirical data from a flock of birds showed that AgentNet could identify hidden interaction ranges exhibited by real birds.
arXiv Detail & Related papers (2020-01-03T02:25:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.