A toolbox for neuromorphic sensing in robotics
- URL: http://arxiv.org/abs/2103.02751v2
- Date: Tue, 5 Oct 2021 09:02:53 GMT
- Title: A toolbox for neuromorphic sensing in robotics
- Authors: Julien Dupeyroux, Stein Stroobants, Guido de Croon
- Abstract summary: We introduce a ROS (Robot Operating System) toolbox to encode and decode input signals coming from any type of sensor available on a robot.
This initiative is meant to stimulate and facilitate robotic integration of neuromorphic AI.
- Score: 4.157415305926584
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The third generation of artificial intelligence (AI) introduced by
neuromorphic computing is revolutionizing the way robots and autonomous systems
can sense the world, process the information, and interact with their
environment. The promise of high flexibility, energy efficiency, and
robustness of neuromorphic systems is widely supported by software tools for
simulating spiking neural networks, and hardware integration (neuromorphic
processors). Yet, while efforts have been made on neuromorphic vision
(event-based cameras), it is worth noting that most of the sensors available
for robotics remain inherently incompatible with neuromorphic computing, where
information is encoded into spikes. To facilitate the use of traditional
sensors, we need to convert the output signals into streams of spikes, i.e., a
series of events (+1, -1) along with their corresponding timestamps. In this
paper, we propose a review of the coding algorithms from a robotics perspective
and further supported by a benchmark to assess their performance. We also
introduce a ROS (Robot Operating System) toolbox to encode and decode input
signals coming from any type of sensor available on a robot. This initiative is
meant to stimulate and facilitate robotic integration of neuromorphic AI, with
the opportunity to adapt traditional off-the-shelf sensors to spiking neural
nets within one of the most powerful robotic tools, ROS.
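As a concrete illustration, the signal-to-spike conversion described in the abstract (events of +1/-1 with timestamps) can be sketched with a simple send-on-delta (threshold-crossing) scheme, one of the coding families such toolboxes typically cover. The function names and the threshold value below are illustrative, not the toolbox's actual API:

```python
def encode_send_on_delta(samples, timestamps, threshold=0.1):
    """Convert a sampled analog signal into (+1/-1, timestamp) spike events.

    A spike is emitted whenever the signal drifts more than `threshold`
    away from the last reference level: +1 for an upward crossing,
    -1 for a downward one.
    """
    events = []
    reference = samples[0]
    for value, t in zip(samples[1:], timestamps[1:]):
        # Emit as many events as full threshold steps crossed since last update.
        while value - reference >= threshold:
            reference += threshold
            events.append((+1, t))
        while reference - value >= threshold:
            reference -= threshold
            events.append((-1, t))
    return events

def decode_send_on_delta(events, initial_value, threshold=0.1):
    """Reconstruct a piecewise-constant estimate of the signal from events."""
    value = initial_value
    reconstruction = []
    for polarity, t in events:
        value += polarity * threshold
        reconstruction.append((t, value))
    return reconstruction

signal = [0.0, 0.05, 0.25, 0.5, 0.45, 0.1]
times = [0, 1, 2, 3, 4, 5]
events = encode_send_on_delta(signal, times, threshold=0.2)
# events == [(1, 2), (1, 3), (-1, 5)]
recon = decode_send_on_delta(events, initial_value=0.0, threshold=0.2)
```

The decoder only recovers the signal up to the threshold resolution, which is the usual accuracy/spike-rate trade-off that a benchmark of coding algorithms would quantify.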
Related papers
- Digitizing Touch with an Artificial Multimodal Fingertip [51.7029315337739]
Humans and robots both benefit from using touch to perceive and interact with the surrounding environment.
Here, we describe several conceptual and technological innovations to improve the digitization of touch.
These advances are embodied in an artificial finger-shaped sensor with advanced sensing capabilities.
arXiv Detail & Related papers (2024-11-04T18:38:50Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - Fully Spiking Neural Network for Legged Robots [6.974746966671198]
Spiking Neural Network (SNN) for legged robots shows exceptional performance in simulated terrains.
SNNs provide natural advantages in inference speed and energy consumption.
This study presents a highly efficient SNN for legged robots that can be seamlessly integrated into other learning models.
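For readers unfamiliar with spiking networks, the neuron model underlying SNNs like this one can be sketched as a leaky integrate-and-fire (LIF) unit. The parameters and the simple Euler update below are illustrative only, not the model used in the paper:

```python
def lif_step(v, input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """One Euler step of a LIF membrane; returns (new_voltage, spiked)."""
    # Leak toward the resting potential, then integrate the input current.
    v = v + (dt / tau) * (v_rest - v) + input_current
    if v >= v_thresh:
        return v_rest, True  # fire a spike and reset
    return v, False

# Drive the neuron with a constant current and collect spike times:
# information is carried by the timing of these discrete events, which is
# what makes SNN inference cheap on neuromorphic hardware.
v, spikes = 0.0, []
for t in range(50):
    v, fired = lif_step(v, input_current=0.15)
    if fired:
        spikes.append(t)
# spikes == [10, 21, 32, 43]: a regular spike train for constant input
```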
arXiv Detail & Related papers (2023-10-08T05:48:30Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Active Predicting Coding: Brain-Inspired Reinforcement Learning for
Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z) - Artificial Intelligence Enables Real-Time and Intuitive Control of
Prostheses via Nerve Interface [25.870454492249863]
A next-generation prosthetic hand that moves and feels like a real hand requires a robust neural interconnection between the human mind and machines.
Here we present a neuroprosthetic system to demonstrate that principle by employing an artificial intelligence (AI) agent to translate the amputee's movement intent through a peripheral nerve interface.
arXiv Detail & Related papers (2022-03-16T14:33:38Z) - A neural net architecture based on principles of neural plasticity and
development evolves to effectively catch prey in a simulated environment [2.834895018689047]
A profound challenge for A-Life is to construct agents whose behavior is 'life-like' in a deep way.
We propose an architecture and approach to constructing networks driving artificial agents, using processes analogous to the processes that construct and sculpt the brains of animals.
We think this architecture may be useful for controlling small autonomous robots or drones, because it allows for a rapid response to changes in sensor inputs.
arXiv Detail & Related papers (2022-01-28T05:10:56Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z) - An Astrocyte-Modulated Neuromorphic Central Pattern Generator for
Hexapod Robot Locomotion on Intel's Loihi [0.0]
Locomotion is a crucial challenge for legged robots that is addressed "effortlessly" by biological networks abundant in nature, known as central pattern generators (CPGs).
Here, we propose a brain-morphic CPG controller based on a comprehensive spiking neural-astrocytic network that generates two gait patterns for a hexapod robot.
Our results pave the way for scaling this and other approaches towards Loihi-controlled locomotion in autonomous mobile robots.
arXiv Detail & Related papers (2020-06-08T17:35:48Z) - Populations of Spiking Neurons for Reservoir Computing: Closed Loop
Control of a Compliant Quadruped [64.64924554743982]
We present a framework for implementing central pattern generators with spiking neural networks to obtain closed loop robot control.
We demonstrate the learning of predefined gait patterns, speed control and gait transition on a simulated model of a compliant quadrupedal robot.
arXiv Detail & Related papers (2020-04-09T14:32:49Z)
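The gait-generation idea shared by the two CPG papers above can be sketched, in a deliberately simplified non-spiking form, as two coupled phase oscillators whose coupling pulls them into antiphase, i.e., an alternating gait. All gains and frequencies are illustrative, not taken from either paper:

```python
import math

def cpg_step(phases, dt=0.01, freq=1.0, coupling=2.0):
    """Advance two coupled phase oscillators by one Euler step.

    Each oscillator runs at `freq` Hz; the sine coupling term pulls the
    pair toward a phase difference of pi (antiphase), the stable
    coordination pattern for alternating legs.
    """
    p0, p1 = phases
    dp0 = 2 * math.pi * freq + coupling * math.sin(p1 - p0 - math.pi)
    dp1 = 2 * math.pi * freq + coupling * math.sin(p0 - p1 - math.pi)
    return (p0 + dt * dp0, p1 + dt * dp1)

phases = (0.0, 0.5)        # start close to in-phase
for _ in range(2000):      # integrate 20 seconds of simulated time
    phases = cpg_step(phases)
phase_diff = (phases[1] - phases[0]) % (2 * math.pi)
# phase_diff settles near pi: the two "legs" end up alternating
```

A spiking implementation replaces each oscillator with a population of neurons whose rhythmic firing plays the same role, which is what makes such controllers portable to hardware like Loihi.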
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.