A neural net architecture based on principles of neural plasticity and development evolves to effectively catch prey in a simulated environment
- URL: http://arxiv.org/abs/2201.11742v2
- Date: Mon, 31 Jan 2022 01:52:42 GMT
- Title: A neural net architecture based on principles of neural plasticity and development evolves to effectively catch prey in a simulated environment
- Authors: Addison Wood, Jory Schossau, Nick Sabaj, Richard Liu, Mark Reimers
- Abstract summary: A profound challenge for A-Life is to construct agents whose behavior is 'life-like' in a deep way.
We propose an architecture and approach to constructing networks driving artificial agents, using processes analogous to the processes that construct and sculpt the brains of animals.
We think this architecture may be useful for controlling small autonomous robots or drones, because it allows for a rapid response to changes in sensor inputs.
- Score: 2.834895018689047
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A profound challenge for A-Life is to construct agents whose behavior is
'life-like' in a deep way. We propose an architecture and approach to
constructing networks driving artificial agents, using processes analogous to
the processes that construct and sculpt the brains of animals. Furthermore, the
instantiation of action is dynamic: the whole network responds in real-time to
sensory inputs to activate effectors, rather than computing a representation of
the optimal behavior and sending off an encoded representation to effector
controllers. There are many parameters and we use an evolutionary algorithm to
select them, in the context of a specific prey-capture task. We think this
architecture may be useful for controlling small autonomous robots or drones,
because it allows for a rapid response to changes in sensor inputs.
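
As a rough illustration of this recipe, the sketch below evolves the parameters of a tiny recurrent network that maps sensor readings directly to effector commands at every time step of a toy prey-capture simulation. This is only a minimal sketch, not the paper's plasticity- and development-based architecture: the environment, fitness function, network sizes, and all names are illustrative assumptions.

```python
# Illustrative sketch only: a tiny recurrent network whose weights are chosen
# by a simple evolutionary loop on a toy prey-capture task. NOT the paper's
# architecture; every name, size, and the environment below are assumptions.
import numpy as np

N_SENSORS, N_HIDDEN, N_EFFECTORS = 4, 8, 2  # assumed dimensions
GENOME_SIZE = N_SENSORS * N_HIDDEN + N_HIDDEN * N_HIDDEN + N_HIDDEN * N_EFFECTORS

def unpack(genome):
    """Split a flat parameter vector into input, recurrent, and output weights."""
    a = N_SENSORS * N_HIDDEN
    b = a + N_HIDDEN * N_HIDDEN
    w_in = genome[:a].reshape(N_SENSORS, N_HIDDEN)
    w_rec = genome[a:b].reshape(N_HIDDEN, N_HIDDEN)
    w_out = genome[b:].reshape(N_HIDDEN, N_EFFECTORS)
    return w_in, w_rec, w_out

def prey_capture_fitness(genome, steps=200, rng=None):
    """Run one episode: the network drives the agent toward a drifting 'prey'.
    Fitness is the negative mean distance to the prey (closer is better)."""
    if rng is None:
        rng = np.random.default_rng(0)
    w_in, w_rec, w_out = unpack(genome)
    h = np.zeros(N_HIDDEN)
    agent = np.zeros(2)
    prey = rng.uniform(-1.0, 1.0, size=2)
    total = 0.0
    for _ in range(steps):
        prey += 0.01 * rng.standard_normal(2)              # prey drifts randomly
        delta = prey - agent
        sensors = np.concatenate([delta, np.tanh(delta)])  # crude 4-d sensor vector
        # The whole network responds at every step: sensors -> hidden -> effectors,
        # with no separate "plan, then execute" stage.
        h = np.tanh(sensors @ w_in + h @ w_rec)
        effectors = np.tanh(h @ w_out)                     # 2-d velocity command
        agent += 0.05 * effectors
        total -= np.linalg.norm(prey - agent)
    return total / steps

def evolve(pop_size=30, generations=40, sigma=0.1, seed=1):
    """A bare-bones truncation-selection evolutionary loop over network parameters."""
    rng = np.random.default_rng(seed)
    pop = 0.5 * rng.standard_normal((pop_size, GENOME_SIZE))
    for gen in range(generations):
        # Same episode noise for every individual in a generation (fair comparison).
        scores = np.array([prey_capture_fitness(g, rng=np.random.default_rng(gen))
                           for g in pop])
        elite = pop[np.argsort(scores)[-pop_size // 5:]]   # keep the top 20%
        children = elite[rng.integers(len(elite), size=pop_size)]
        pop = children + sigma * rng.standard_normal(children.shape)  # mutate
    return pop[np.argmax([prey_capture_fitness(g) for g in pop])]

if __name__ == "__main__":
    best_genome = evolve()
    print("best fitness:", prey_capture_fitness(best_genome))
```

The point of the sketch is the control loop inside prey_capture_fitness: effector activations are produced directly from the current sensor vector on every step, with no intermediate representation of an optimal behavior, while the outer evolutionary loop only ever scores whole episodes.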
Related papers
- Spiking Neural Networks as a Controller for Emergent Swarm Agents [8.816729033097868]
Existing research explores the possible emergent behaviors in swarms of robots with only a binary sensor and a simple but hand-picked controller structure.
This paper investigates the feasibility of training spiking neural networks to find those local interaction rules that result in particular emergent behaviors.
arXiv Detail & Related papers (2024-10-21T16:41:35Z)
- No-brainer: Morphological Computation driven Adaptive Behavior in Soft Robots [0.24554686192257422]
We show that intelligent behavior can be created without a separate and explicit brain for robot control.
Specifically, we show that adaptive and complex behavior can be created in voxel-based virtual soft robots by using simple reactive materials.
arXiv Detail & Related papers (2024-07-23T16:20:36Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is challenging to achieve through advances in conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Active Predictive Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Towards the Neuroevolution of Low-level Artificial General Intelligence [5.2611228017034435]
We argue that the search for Artificial General Intelligence (AGI) should start from a much lower level than human-level intelligence.
Our hypothesis is that learning occurs through sensory feedback when an agent acts in an environment.
We evaluate a method to evolve a biologically-inspired artificial neural network that learns from environment reactions.
arXiv Detail & Related papers (2022-07-27T15:30:50Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- A toolbox for neuromorphic sensing in robotics [4.157415305926584]
We introduce a ROS (Robot Operating System) toolbox to encode and decode input signals coming from any type of sensor available on a robot.
This initiative is meant to stimulate and facilitate robotic integration of neuromorphic AI.
arXiv Detail & Related papers (2021-03-03T23:22:05Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.