Fully neuromorphic vision and control for autonomous drone flight
- URL: http://arxiv.org/abs/2303.08778v1
- Date: Wed, 15 Mar 2023 17:19:45 GMT
- Title: Fully neuromorphic vision and control for autonomous drone flight
- Authors: Federico Paredes-Vallés, Jesse Hagenaars, Julien Dupeyroux, Stein Stroobants, Yingfu Xu, Guido de Croon
- Abstract summary: Event-based vision and spiking neural hardware promise the low-latency, energy-efficient perception and action of biological sensing and processing.
Here, we present a fully learned neuromorphic pipeline for controlling a freely flying drone.
Results illustrate the potential of neuromorphic sensing and processing for enabling smaller, more intelligent robots.
- Score: 5.358212984063069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biological sensing and processing is asynchronous and sparse, leading to
low-latency and energy-efficient perception and action. In robotics,
neuromorphic hardware for event-based vision and spiking neural networks
promises to exhibit similar characteristics. However, robotic implementations
have been limited to basic tasks with low-dimensional sensory inputs and motor
actions due to the restricted network size in current embedded neuromorphic
processors and the difficulties of training spiking neural networks. Here, we
present the first fully neuromorphic vision-to-control pipeline for controlling
a freely flying drone. Specifically, we train a spiking neural network that
accepts high-dimensional raw event-based camera data and outputs low-level
control actions for performing autonomous vision-based flight. The vision part
of the network, consisting of five layers and 28.8k neurons, maps incoming raw
events to ego-motion estimates and is trained with self-supervised learning on
real event data. The control part consists of a single decoding layer and is
learned with an evolutionary algorithm in a drone simulator. Robotic
experiments show a successful sim-to-real transfer of the fully learned
neuromorphic pipeline. The drone can accurately follow different ego-motion
setpoints, allowing for hovering, landing, and maneuvering
sideways, even while yawing at the same time. The neuromorphic
pipeline runs on board on Intel's Loihi neuromorphic processor with an
execution frequency of 200 Hz, spending only 27 µJ per
inference. These results illustrate the potential of neuromorphic sensing and
processing for enabling smaller, more intelligent robots.
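To make the vision-to-control split concrete, here is a minimal, hypothetical sketch in plain NumPy: a stack of leaky integrate-and-fire (LIF) layers stands in for the vision network, and a single linear decoding layer maps its spikes to low-level commands, mirroring the paper's vision/control split. Layer sizes, time constants, and the four-dimensional output are illustrative assumptions, not the published architecture.

```python
import numpy as np

class LIFLayer:
    """Leaky integrate-and-fire layer with persistent membrane state."""
    def __init__(self, n_in, n_out, tau=0.9, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.tau, self.threshold = tau, threshold
        self.v = np.zeros(n_out)  # membrane potential, kept across time steps

    def step(self, spikes_in):
        self.v = self.tau * self.v + spikes_in @ self.w  # leak + integrate
        spikes_out = (self.v >= self.threshold).astype(float)
        self.v *= 1.0 - spikes_out  # reset the neurons that fired
        return spikes_out

# "Vision" stack: maps a flattened binary event frame to spiking features.
vision = [LIFLayer(64 * 64, 256), LIFLayer(256, 128), LIFLayer(128, 32)]

# "Control" part: one non-spiking linear decoding layer, turning the last
# layer's spikes into low-level commands (e.g., thrust + body rates).
w_decode = np.random.default_rng(1).normal(0, 0.1, (32, 4))

def pipeline_step(event_frame):
    s = event_frame.reshape(-1)
    for layer in vision:
        s = layer.step(s)
    return s @ w_decode  # 4 hypothetical control outputs

events = (np.random.default_rng(2).random((64, 64)) > 0.95).astype(float)
print(pipeline_step(events))
```

On neuromorphic hardware such as Loihi, state updates like these would run asynchronously and sparsely in silicon rather than as dense matrix products.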
Related papers
- Spiking Neural Networks as a Controller for Emergent Swarm Agents [8.816729033097868]
Existing research explores the possible emergent behaviors in swarms of robots with only a binary sensor and a simple but hand-picked controller structure.
This paper investigates the feasibility of training spiking neural networks to find those local interaction rules that result in particular emergent behaviors.
arXiv Detail & Related papers (2024-10-21T16:41:35Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
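As a rough illustration of the pre-training idea above, the hypothetical sketch below masks random tokens in a sensorimotor sequence and trains a small Transformer encoder to reconstruct them; the token dimension, masking ratio, and plain-encoder design are assumptions, not RPT's exact recipe.

```python
import torch
import torch.nn as nn

d_model, seq_len, batch = 64, 32, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, d_model)           # reconstructs masked tokens
mask_token = nn.Parameter(torch.zeros(d_model))

tokens = torch.randn(batch, seq_len, d_model)  # stand-in sensorimotor tokens
mask = torch.rand(batch, seq_len) < 0.5        # hide half of each sequence
inp = torch.where(mask.unsqueeze(-1), mask_token.expand_as(tokens), tokens)

pred = head(encoder(inp))
loss = ((pred - tokens)[mask] ** 2).mean()     # loss only on masked positions
loss.backward()
print(float(loss))
```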
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
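The swap augmentation above can be pictured as follows: each animal's neural data is re-paired with a same-action behavior snippet from another animal, encouraging animal-invariant encodings. Data shapes and the label-matching rule in this sketch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# (neural, behavior, action_label) triplets for two hypothetical animals
animal_a = [(rng.random(128), rng.random(30), rng.integers(3)) for _ in range(100)]
animal_b = [(rng.random(128), rng.random(30), rng.integers(3)) for _ in range(100)]

def swap_augment(src, tgt):
    """Pair each source sample's neural data with a same-action behavior
    snippet drawn from the other animal."""
    by_action = {}
    for neural, behavior, action in tgt:
        by_action.setdefault(int(action), []).append(behavior)
    augmented = []
    for neural, behavior, action in src:
        candidates = by_action.get(int(action))
        if candidates:  # swap in a behavior snippet from the other animal
            behavior = candidates[rng.integers(len(candidates))]
        augmented.append((neural, behavior, action))
    return augmented

print(len(swap_augment(animal_a, animal_b)))
```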
- Evolved neuromorphic radar-based altitude controller for an autonomous open-source blimp [4.350434044677268]
In this paper, we propose an evolved altitude controller based on an SNN for a robotic airship.
We also present an SNN-based controller architecture, an evolutionary framework for training the network in simulation, and a control strategy for bridging the reality gap.
arXiv Detail & Related papers (2021-10-01T20:48:43Z)
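A minimal sketch of the evolutionary part, assuming a toy one-dimensional altitude model and a (1+λ) evolution strategy: candidate parameters are perturbed, rolled out in simulation, and the fittest survive. The paper evolves a spiking network, which is stubbed here as a two-parameter linear controller.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(params, setpoint=2.0, steps=200, dt=0.05):
    """Simulate a crude 1-D airship; fitness is negative tracking cost."""
    altitude, velocity, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - altitude
        thrust = params[0] * error - params[1] * velocity  # stand-in controller
        velocity += (thrust - 0.5) * dt                    # constant sink term
        altitude += velocity * dt
        cost += abs(error) * dt
    return -cost

best = rng.normal(0, 0.1, 2)
for gen in range(50):
    offspring = [best + rng.normal(0, 0.05, 2) for _ in range(16)]
    best = max([best] + offspring, key=rollout)  # (1+lambda) selection
print("fitness:", rollout(best), "params:", best)
```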
- A toolbox for neuromorphic sensing in robotics [4.157415305926584]
We introduce a ROS (Robot Operating System) toolbox to encode and decode input signals coming from any type of sensor available on a robot.
This initiative is meant to stimulate and facilitate robotic integration of neuromorphic AI.
arXiv Detail & Related papers (2021-03-03T23:22:05Z)
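For intuition, below is a minimal rate-coding round trip of the kind such a toolbox automates. The Poisson-style encoder and window-average decoder are common textbook choices assumed here; the actual toolbox wraps conversions like this in ROS publishers and subscribers.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_rate(signal, n_steps=100, max_rate=0.9):
    """Clamp a normalized sensor value to [0, 1] and emit Poisson-like spikes."""
    p = np.clip(signal, 0.0, 1.0) * max_rate
    return (rng.random(n_steps) < p).astype(int)

def decode_rate(spikes, max_rate=0.9):
    """Recover the signal as the mean firing rate over the window."""
    return spikes.mean() / max_rate

reading = 0.63                    # e.g., a normalized range-sensor value
spikes = encode_rate(reading)
print(reading, "->", round(decode_rate(spikes), 2))
```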
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
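A typical way to set up such a fine-tune with torchvision is sketched below, following the standard torchvision detection pattern of swapping both prediction heads for a two-class problem (background vs. hand); this mirrors the general recipe, not the paper's exact training code.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Start from a COCO-pretrained Mask R-CNN and replace both heads.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2  # background + hand
in_box = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_box, num_classes)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)

# One dummy training-mode forward on a synthetic image/annotation pair.
model.train()
images = [torch.rand(3, 256, 256)]
targets = [{
    "boxes": torch.tensor([[30.0, 30.0, 120.0, 120.0]]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),
}]
losses = model(images, targets)  # dict of classification/box/mask losses
print(sum(losses.values()))
```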
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
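One way to picture co-design with a frozen task model: train a narrow codec so that the task model's outputs, rather than the raw pixels, survive compression. The sizes, the MSE surrogate objective, and the MLP task model below are illustrative assumptions (512 to 48 dimensions is roughly the 11x compression mentioned above).

```python
import torch
import torch.nn as nn

task_model = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 10))
task_model.requires_grad_(False)  # stands in for the frozen perception model

bottleneck = 48  # ~11x smaller than the 512-dim raw observation
codec = nn.Sequential(
    nn.Linear(512, bottleneck),   # "transmit" only the compressed code
    nn.Linear(bottleneck, 512),   # decode on the receiving side
)

opt = torch.optim.Adam(codec.parameters(), lr=1e-3)
for _ in range(200):
    x = torch.randn(32, 512)      # stand-in batch of sensory data
    # preserve task outputs through the bottleneck, not the pixels themselves
    loss = ((task_model(codec(x)) - task_model(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```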
- Robust trajectory generation for robotic control on the neuromorphic research chip Loihi [0.0]
We exploit a recently developed spiking neural network model, the so-called anisotropic network.
We show that the anisotropic network on Loihi reliably encodes sequential patterns of neural activity.
Taken together, our study presents a new algorithm that allows the generation of complex robotic movements.
arXiv Detail & Related papers (2020-08-26T16:02:39Z)
- Bio-inspired Gait Imitation of Hexapod Robot Using Event-Based Vision Sensor and Spiking Neural Network [2.4603302139672003]
Some animals, like humans, imitate surrounding individuals to speed up their learning.
This complex problem of imitation-based learning requires forming associations between visual data and muscle actuation.
We propose a bio-inspired feed-forward approach based on neuromorphic computing and event-based vision to address the gait imitation problem.
arXiv Detail & Related papers (2020-04-11T17:55:34Z)
- VGAI: End-to-End Learning of Vision-Based Decentralized Controllers for Robot Swarms [237.25930757584047]
We propose to learn decentralized controllers based on solely raw visual inputs.
For the first time, our framework integrates the learning of two key components: communication and visual perception.
Our proposed learning framework combines a convolutional neural network (CNN) for each robot to extract messages from the visual inputs, and a graph neural network (GNN) over the entire swarm to transmit, receive and process these messages.
arXiv Detail & Related papers (2020-02-06T15:25:23Z)
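The CNN-plus-GNN split can be sketched as follows: each robot's CNN emits a message vector, and one round of message passing over the (here, fully connected) swarm graph yields per-robot actions. Image size, message width, and single-hop mean aggregation are assumptions for illustration.

```python
import torch
import torch.nn as nn

n_robots, msg_dim = 5, 16
cnn = nn.Sequential(                             # per-robot visual encoder
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, msg_dim),
)
gnn_update = nn.Linear(2 * msg_dim, msg_dim)     # own + neighbor messages
policy = nn.Linear(msg_dim, 2)                   # e.g., 2-D velocity command

views = torch.randn(n_robots, 3, 64, 64)         # one camera image per robot
msgs = cnn(views)                                # (n_robots, msg_dim)
adj = torch.ones(n_robots, n_robots) - torch.eye(n_robots)
neighbor_mean = adj @ msgs / (n_robots - 1)      # mean of neighbors' messages
h = torch.relu(gnn_update(torch.cat([msgs, neighbor_mean], dim=1)))
print(policy(h).shape)                           # per-robot actions: (5, 2)
```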
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
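To see why a non-monotonic activation makes single-neuron XOR possible, the sketch below trains one linear neuron followed by a bump-shaped activation. The Gaussian bump exp(-z^2) is a stand-in with the same qualitative shape as ADA, not the paper's exact function.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

neuron = nn.Linear(2, 1)  # a single neuron; the bump does the non-linear work
opt = torch.optim.Adam(neuron.parameters(), lr=0.05)
for _ in range(2000):
    z = neuron(x)
    pred = torch.exp(-z ** 2)  # bump activation: high only near z = 0
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print((pred > 0.5).float().flatten())  # expect [0., 1., 1., 0.]
```

A monotonic activation cannot do this with one neuron, since XOR is not linearly separable; the bump lets the neuron fire only when its pre-activation lands near zero, which a single hyperplane can arrange for exactly the (0,1) and (1,0) inputs.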
This list is automatically generated from the titles and abstracts of the papers on this site.