Bio-inspired Gait Imitation of Hexapod Robot Using Event-Based Vision Sensor and Spiking Neural Network
- URL: http://arxiv.org/abs/2004.05450v1
- Date: Sat, 11 Apr 2020 17:55:34 GMT
- Title: Bio-inspired Gait Imitation of Hexapod Robot Using Event-Based Vision Sensor and Spiking Neural Network
- Authors: Justin Ting, Yan Fang, Ashwin Sanjay Lele, Arijit Raychowdhury
- Abstract summary: Some animals, like humans, imitate surrounding individuals to speed up their learning.
This complex problem of imitation-based learning forms associations between visual data and muscle actuation.
We propose a bio-inspired feed-forward approach based on neuromorphic computing and event-based vision to address the gait imitation problem.
- Score: 2.4603302139672003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning how to walk is a sophisticated neurological task for most animals.
In order to walk, the brain must synthesize multiple cortices, neural circuits,
and diverse sensory inputs. Some animals, like humans, imitate surrounding
individuals to speed up their learning. When humans watch their peers, visual
data is processed through a visual cortex in the brain. This complex problem of
imitation-based learning forms associations between visual data and muscle
actuation through Central Pattern Generation (CPG). Reproducing this imitation
phenomenon on low power, energy-constrained robots that are learning to walk
remains challenging and unexplored. We propose a bio-inspired feed-forward
approach based on neuromorphic computing and event-based vision to address the
gait imitation problem. The proposed method trains a "student" hexapod to walk
by watching an "expert" hexapod moving its legs. The student processes the flow
of Dynamic Vision Sensor (DVS) data with a one-layer Spiking Neural Network
(SNN). The SNN of the student successfully imitates the expert within a small
convergence time of ten iterations and exhibits energy efficiency at the
sub-microjoule level.
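To make the pipeline concrete, below is a minimal Python sketch of a one-layer spiking network that integrates pooled DVS events and emits per-leg spikes. The LIF dynamics, the 8x8 event grid, the threshold values, and all variable names are illustrative assumptions, not the authors' implementation; in the paper, the output spikes would be associated with the expert's observed leg movements, while here the event stream is synthetic.

```python
# Minimal sketch: a one-layer spiking network mapping DVS event activity to
# hexapod leg commands. LIF dynamics, shapes, and constants are assumptions.
import numpy as np

N_INPUT = 64      # assumed: DVS pixels pooled into an 8x8 event grid
N_LEGS = 6        # hexapod has six legs

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(N_LEGS, N_INPUT))  # single synaptic layer

v = np.zeros(N_LEGS)          # membrane potentials
TAU, V_TH = 0.9, 1.0          # leak factor and spike threshold (assumed)

def step(event_frame):
    """One simulation step: integrate pooled DVS events, emit leg spikes."""
    global v
    v = TAU * v + weights @ event_frame   # leaky integration of input current
    spikes = (v >= V_TH).astype(float)    # threshold crossing -> output spike
    v[spikes > 0] = 0.0                   # reset neurons that spiked
    return spikes                         # a spike per leg drives its swing phase

# Toy event stream: sparse random events standing in for the expert's leg motion.
for t in range(20):
    frame = (rng.random(N_INPUT) < 0.05).astype(float)
    leg_spikes = step(frame)
    print(t, leg_spikes.astype(int))
```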
Related papers
- Orangutan: A Multiscale Brain Emulation-Based Artificial Intelligence Framework for Dynamic Environments [2.8137865669570297]
This paper introduces a novel brain-inspired AI framework, Orangutan.
It simulates the structure and computational mechanisms of biological brains on multiple scales.
The authors developed a sensorimotor model that simulates human saccadic eye movements during object observation.
arXiv Detail & Related papers (2024-06-18T01:41:57Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
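The principal-subspace claim above can be illustrated with Oja's subspace rule, a classic Hebbian update whose subtractive term plays the decorrelating role of anti-Hebbian lateral inhibition. This is a stand-in sketch with synthetic data, not the paper's recurrent lateral-connection network.

```python
# Minimal sketch of Hebbian principal-subspace extraction (Oja's subspace rule).
import numpy as np

rng = np.random.default_rng(1)
D, K, LR = 10, 3, 1e-3

# Synthetic data whose variance is concentrated in the first K coordinates.
X = rng.normal(size=(5000, D))
X[:, :K] *= 5.0

W = rng.normal(0.0, 0.1, size=(K, D))   # K output neurons, Hebbian weights
for x in X:
    y = W @ x                            # feed-forward responses
    # The Hebbian term outer(y, x) grows weights along high-variance directions;
    # the subtractive term decorrelates outputs (like anti-Hebbian lateral
    # connections) and keeps W bounded.
    W += LR * (np.outer(y, x) - np.outer(y, y) @ W)

# Rows of W now approximately span the principal subspace (first K axes here).
print(np.round(W, 2))
```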
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Achieving More Human Brain-Like Vision via Human EEG Representational Alignment [1.811217832697894]
We present 'Re(presentational)Al(ignment)net', a vision model aligned with human brain activity based on non-invasive EEG.
Our innovative image-to-brain multi-layer encoding framework advances human neural alignment by optimizing multiple model layers.
Our findings suggest that ReAlnet represents a breakthrough in bridging the gap between artificial and human vision, paving the way for more brain-like artificial intelligence systems.
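As an illustration of what a multi-layer image-to-brain encoding objective might look like, here is a toy sketch in which one linear head per model layer predicts EEG channels and the per-layer errors are summed. The layer names, shapes, channel count, and MSE objective are all assumptions, not ReAlnet's actual framework.

```python
# Toy multi-layer alignment loss: each model layer gets a linear head that
# predicts EEG channels; per-layer MSEs are summed. All shapes are assumed.
import numpy as np

rng = np.random.default_rng(2)
EEG_CH = 17                      # assumed number of EEG channels

# Fake activations from three model layers for a batch of 8 images.
acts = {"layer1": rng.normal(size=(8, 64)),
        "layer2": rng.normal(size=(8, 128)),
        "layer3": rng.normal(size=(8, 256))}
eeg = rng.normal(size=(8, EEG_CH))          # recorded EEG for the same images

# One linear encoding head per layer (these would be learned jointly in practice).
heads = {k: rng.normal(0.0, 0.05, size=(a.shape[1], EEG_CH))
         for k, a in acts.items()}

def alignment_loss(acts, heads, eeg):
    """Sum of per-layer MSEs between predicted and recorded EEG."""
    return sum(np.mean((a @ heads[k] - eeg) ** 2) for k, a in acts.items())

print(alignment_loss(acts, heads, eeg))
```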
arXiv Detail & Related papers (2024-01-30T18:18:41Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
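The disambiguation claim can be illustrated with a toy 1-D fusion example, where an ambiguous (bimodal) visual likelihood over object pose is resolved by a single touch measurement. This is a didactic stand-in, not the NeuralFeels neural-field or pose-graph machinery.

```python
# Toy example: touch disambiguates a bimodal visual pose estimate.
import numpy as np

poses = np.linspace(-1.0, 1.0, 201)          # candidate 1-D object poses

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Vision alone is bimodal: occlusion makes two poses equally plausible.
vision = gauss(poses, -0.5, 0.1) + gauss(poses, 0.5, 0.1)

# A single contact measurement near +0.5 strongly favours one mode.
touch = gauss(poses, 0.48, 0.05)

posterior = vision * touch                   # fuse by multiplying likelihoods
posterior /= posterior.sum()
print("fused pose estimate:", poses[np.argmax(posterior)])  # close to +0.5
```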
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- Fully neuromorphic vision and control for autonomous drone flight [5.358212984063069]
Event-based vision and spiking neural hardware promise the asynchronous, low-latency, and energy-efficient characteristics of biological sensing and processing.
Here, we present a fully learned neuromorphic pipeline for controlling a flying drone.
Results illustrate the potential of neuromorphic sensing and processing for enabling smaller, more energy-efficient networks for flight control.
arXiv Detail & Related papers (2023-03-15T17:19:45Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
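A sketch of what such a swap augmentation could look like is given below, under the assumption that behavioral similarity is measured by a simple correlation between behavioral feature vectors; the paper's actual similarity criterion and data layout may differ.

```python
# Sketch of a swap augmentation: exchange (neural, behavior) pairings between
# samples from different animals whose behavioral features look similar.
import numpy as np

rng = np.random.default_rng(3)

def swap_across_animals(neural, behavior, animal_ids, sim_threshold=0.9):
    """Swap neural recordings between cross-animal sample pairs whose
    behavioral features are highly correlated (crude similarity proxy)."""
    n = len(behavior)
    for i in range(n):
        for j in range(i + 1, n):
            if animal_ids[i] == animal_ids[j]:
                continue                              # only swap across animals
            sim = np.corrcoef(behavior[i], behavior[j])[0, 1]
            if sim > sim_threshold:
                neural[[i, j]] = neural[[j, i]]       # exchange neural rows
    return neural, behavior

# Toy usage: 6 samples from 3 animals, random features.
neural = rng.normal(size=(6, 32))
behavior = rng.normal(size=(6, 16))
ids = np.array([0, 0, 1, 1, 2, 2])
neural, behavior = swap_across_animals(neural, behavior, ids)
```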
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Deep Auto-encoder with Neural Response [8.797970797884023]
We propose a hybrid model, called the deep auto-encoder with neural response (DAE-NR).
The DAE-NR incorporates the information from the visual cortex into ANNs to achieve better image reconstruction and higher neural representation similarity between biological and artificial neurons.
Our experiments demonstrate that, if and only if joint learning is used, DAE-NRs can (i) improve the performance of image reconstruction and (ii) increase the representational similarity between biological and artificial neurons.
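The joint objective implied here can be sketched as a reconstruction loss plus a term rewarding correlation between artificial and biological responses. The weighting, the similarity measure, and the synthetic data below are assumptions for illustration, not the DAE-NR implementation.

```python
# Sketch of a joint objective: autoencoder reconstruction error plus a term
# rewarding correlation between artificial and (synthetic) biological responses.
import numpy as np

rng = np.random.default_rng(4)

x = rng.normal(size=(8, 100))                 # batch of flattened images
W_enc = rng.normal(0.0, 0.1, size=(100, 20))  # encoder weights
W_dec = rng.normal(0.0, 0.1, size=(20, 100))  # decoder weights
bio = rng.normal(size=(8, 20))                # stand-in biological responses

def joint_loss(x, bio, lam=0.5):
    z = np.tanh(x @ W_enc)                    # artificial "neural response"
    x_hat = z @ W_dec                         # reconstruction
    rec = np.mean((x - x_hat) ** 2)           # image-reconstruction term
    # Similarity term: mean per-unit correlation between artificial and
    # biological responses across the batch (higher is better).
    sim = np.mean([np.corrcoef(z[:, k], bio[:, k])[0, 1] for k in range(20)])
    return rec - lam * sim                    # joint learning objective

print(joint_loss(x, bio))
```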
arXiv Detail & Related papers (2021-11-30T11:44:17Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
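For intuition, a bare-bones neuroevolution loop over RNN weight matrices looks like the following; the fitness function is a placeholder, not the paper's spatial or working-memory task.

```python
# Bare-bones neuroevolution of RNN weights: evaluate, keep elites, mutate.
import numpy as np

rng = np.random.default_rng(5)
N = 16                                        # RNN size (assumed)

def rollout_fitness(W):
    """Placeholder fitness: run the RNN and reward sustained mean activity."""
    h = np.zeros(N)
    total = 0.0
    for _ in range(50):
        h = np.tanh(W @ h + rng.normal(0, 0.1, N))  # recurrent dynamics + noise
        total += h.mean()
    return total

pop = [rng.normal(0, 0.5, size=(N, N)) for _ in range(20)]
for gen in range(10):
    scores = [rollout_fitness(W) for W in pop]
    elite = [pop[i] for i in np.argsort(scores)[-5:]]   # keep top 5 genomes
    pop = [e + rng.normal(0, 0.05, (N, N))              # mutate elites
           for e in elite for _ in range(4)]
print("best fitness:", max(scores))
```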
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.